

The AI Working Group, established under Victim Support Europe (VSE), is dedicated to exploring how advancements in technology, particularly artificial intelligence, can be responsibly and effectively integrated into victim support services. With representatives from across Europe, including the Netherlands, Portugal, Finland, and the UK, the group serves as a collaborative platform for sharing knowledge, innovations, and ethical practices in tech-based victim care.
On Thursday, April 10th, the AI Working Group convened for its latest session, featuring a compelling presentation by Saara Huhanantti, Director of Finland's Project 5/5. The session focused on strengthening generic victim support services by learning from successful applications of AI in youth mental health.
Project 5/5 aims to explore how technology and AI can enhance support services, particularly for young people, while also creating scalable models for other non-profit organisations. A central example shared during the meeting was the Sekasin chat service—Finland’s most widely used low-threshold mental health platform for youth aged 12 to 29.
Launched in 2016, Sekasin chat operates every day of the year, offering free, anonymous, and confidential one-on-one conversations with either professional counsellors or trained volunteers. In response to growing demand, the service has developed the Sekasin AI Assistant, a chatbot tailored to youth mental health concerns.
The AI assistant helps bridge support gaps in critical moments: when users are waiting for a human counsellor, when live support is unavailable, or when young people wish to test the waters before committing to a conversation with a person. It provides information, self-help tips, and guidance on managing distressing feelings—all while reinforcing users’ autonomy and privacy.
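As a rough illustration of the gap-bridging logic described above, the sketch below shows how a service of this kind might decide whether a conversation begins with the AI assistant or a human counsellor. The names and structure here are hypothetical assumptions made for illustration, not a description of Sekasin's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SupportRequest:
    """Hypothetical state of an incoming chat request (illustrative only)."""
    counsellor_available: bool   # is a human counsellor free right now?
    user_prefers_ai_first: bool  # does the user want to try the AI first?

def route_conversation(request: SupportRequest) -> str:
    """Decide where a new conversation starts, per the gaps described above."""
    if request.user_prefers_ai_first:
        # The user tests the waters before committing to a human conversation.
        return "ai_assistant"
    if request.counsellor_available:
        # Live human support takes priority whenever it is available.
        return "human_counsellor"
    # Outside opening hours, or while queuing, the AI assistant bridges the gap.
    return "ai_assistant"
```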
Ethical safeguards are at the heart of Sekasin’s digital model. The AI is trained to identify potentially dangerous situations, such as risks of self-harm, and the platform ensures all data is processed solely within the EU, without long-term storage. Moreover, users are clearly informed when they are interacting with AI rather than a human, maintaining transparency and trust.
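To make these safeguards concrete, here is a minimal sketch of how such checks might be wired together: a risk flag that escalates to a human, an explicit disclosure that the user is talking to an AI, and a reply produced without persisting the message. All names are assumptions, and the keyword check is a deliberate simplification; the service described above relies on a trained AI, not a phrase list.

```python
# Hypothetical safeguard pipeline (illustrative assumptions, not Sekasin's code).
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human counsellor."

# Stand-in for a trained risk classifier, used here only to keep the sketch short.
RISK_PHRASES = ("hurt myself", "self-harm", "end my life")

def detect_risk(message: str) -> bool:
    """Flag potentially dangerous situations, such as a risk of self-harm."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in RISK_PHRASES)

def handle_message(message: str) -> dict:
    """Process one user message with the safeguards described above."""
    response = {
        "disclosure": AI_DISCLOSURE,                 # transparency: users know it is AI
        "escalate_to_human": detect_risk(message),   # route risky cases to people
    }
    # The message is handled in memory only and never written to storage,
    # mirroring the no-long-term-storage principle described above.
    return response
```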
The presentation offered an insightful example of how AI can serve not as a replacement for human support, but as a meaningful, ethical complement to existing services. It sparked rich discussions among participants on how similar models might be adapted across the EU to support victims of all crimes.