The Bavarian startup is targeting insurance fraud first, and sees Europe’s push for explainable AI as a competitive edge
Neuramancer AI Solutions GmbH has closed a €1.7 million pre-seed funding round to accelerate the commercialisation of its deepfake detection platform, with an initial focus on the insurance industry.
The Bavarian startup, which rebranded from Neuraforge, was founded on the observation that AI-generated media manipulation is not a hypothetical risk.
The German Insurance Association (GDV) has documented billions of euros in annual damages from insurance fraud, a figure the association says is rising as generative AI makes it trivially easy to alter damage photographs and manipulate video calls.
Neuramancer is betting that the harder the problem gets, the more valuable the solution becomes.
The round is led by Vanagon Ventures, with participation from Bayern Kapital (investing through its Innovationsfonds EFRE II), Nuremberg-based ZOHO.VC, and family office Lightfield Equity.
Senior executives from financial services and big tech, as well as experienced platform founders, have also joined as business angels.
What distinguishes Neuramancer’s approach, the company argues, is forensic depth rather than pattern matching. Its detection system analyses statistical irregularities in image and video noise, focusing on structural artefacts rather than semantic content.
The company says this allows it to catch manipulations that conventional AI-based detectors miss, including edits made with the latest generative models, and to produce forensic analysis reports that show investigators not just whether media has been altered, but where and how.
“While many providers rely on opaque black-box models, we pursue a scientifically grounded, fully transparent approach,” said co-founder Anika Gruner.
“European, explainable AI will become a strategic competitive advantage for companies that need to protect themselves against synthetic manipulation.”
The transparency argument is pointed. As regulatory requirements for auditable AI systems intensify across the EU, particularly under the AI Act and sector-specific frameworks, Neuramancer is positioning explainability not just as a feature but as a compliance advantage.
Insurance firms evaluating fraud prevention tools will face increasing pressure to demonstrate that their detection methods are legible to regulators and to courts.
The new capital will fund platform development, team expansion, and market entry, starting with the German insurance sector before broader commercialisation.
Neuramancer is entering a market that did not exist at meaningful scale until generative AI matured, which is both its opportunity and its constraint. Detection tools must keep pace with generation tools in a race that shows no sign of slowing.