When Charanarravindaa Suriess began experimenting with computers at the age of four, he was not thinking about artificial intelligence, cybersecurity or global regulatory frameworks. He was, by his own account, simply drawn to building things such as websites, apps, and occasionally tools to “make fun of my teachers.”
Today, as a recent A-Levels graduate from Sunway College Johor Bahru, he is the chief executive and chief technology officer of Cortexa Labs, a startup attempting to tackle one of the most urgent and under-addressed problems in artificial intelligence: how to make models resilient against attack.
“The idea that you could take something that didn’t exist and make it real, that’s what kept me building,” Suriess said. “But the turning point was realising I didn’t need to wait for permission to start.”
The origins of Cortexa Labs can be traced back to a hackathon at Nanyang Technological University in Singapore, where Suriess set out to explore machine learning more seriously.
He didn’t win. He didn’t even finish.
But the idea he presented, a system that could identify where an AI model fails and then use those failures to make it stronger, caught the attention of professors, who encouraged him to publish it anyway.
“That question stuck with me: what would it look like to build a system that systematically finds where a model breaks, and then strengthens it?” he said. “That became Robusto.”

Robusto is now Cortexa Labs’ flagship product, a platform designed to stress-test AI systems through simulated attacks, identify vulnerabilities and retrain models using targeted synthetic data.
The Blind Spots Problem
As artificial intelligence systems are deployed across industries from finance to healthcare to autonomous systems, their vulnerabilities are becoming increasingly apparent.
AI models, Suriess argues, are not flawed because they are poorly built. They are flawed because they are incomplete.
“Every model is trained on what it has seen,” he said. “Attackers exploit everything it hasn’t.”
In financial services, for example, fraud detection systems often struggle because fraudulent transactions are rare, leaving models underexposed to the very patterns they need to detect. In computer vision systems, adversarial attacks such as specially designed clothing that confuses image classifiers can cause systems to misidentify or ignore threats entirely.
Even large language models have shown susceptibility to so-called “jailbreaking” prompts, which can expose sensitive data or override safeguards.
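The computer-vision case is the easiest to make concrete. The sketch below uses the fast gradient sign method (FGSM), a standard textbook attack chosen here purely for illustration; it shows how a small, targeted perturbation is constructed, and says nothing about the specific attacks Robusto runs.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Nudge each pixel in the direction that most increases the loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # A perturbation too small for a human to notice can still flip
    # the classifier's prediction.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0, 1).detach()
```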
“No model is attack-proof,” Suriess said. “The goal isn’t invulnerability. It’s understanding your weaknesses and systematically reducing them.”
Turning Weakness Into Data
Cortexa Labs’ approach borrows from cybersecurity practices, particularly “red-teaming,” where systems are intentionally attacked to expose weaknesses.
But instead of stopping at detection, Robusto attempts to close the loop.
The platform generates adversarial data tailored to the model’s specific blind spots and uses it to retrain the system, improving its ability to generalise against future attacks.
“It’s not a one-time fix,” Suriess said. “Attackers evolve, so the defence has to evolve as well.”
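In code, that detect-and-retrain loop might look roughly like the following, a minimal adversarial-training sketch that reuses the illustrative fgsm_attack above and assumes ordinary PyTorch conventions; Robusto's actual attack suite and retraining strategy are not public.

```python
import torch.nn.functional as F

def harden(model, loader, optimizer, epochs=3, epsilon=0.03):
    """Generate adversarial inputs, then retrain on them alongside clean data."""
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            # 1. Probe: craft inputs the current model mishandles.
            adv = fgsm_attack(model, images, labels, epsilon)
            # 2. Retrain: mix clean and adversarial batches so the model
            #    covers the blind spot without forgetting what it knew.
            loss = (F.cross_entropy(model(images), labels)
                    + F.cross_entropy(model(adv), labels))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```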
The company has attracted early interest from fintech firms, where the stakes of model failure are particularly high, as well as from investors including Fusen World, a venture capital firm based in Georgia.
Beyond security, Suriess sees a deeper structural problem holding back AI: the lack of high-quality, domain-specific data.
Large enterprises often guard proprietary datasets, while startups rely on public data that may be incomplete or imbalanced. Synthetic data has emerged as a solution, but current approaches often prioritise visual realism over physical or causal accuracy.
“Looking realistic and being realistic are not the same thing,” he said.
Cortexa Labs’ longer-term vision involves generating synthetic data that adheres to real-world constraints — an ambition that intersects with emerging research into so-called “world models,” which aim to teach AI systems the underlying rules of physical reality.
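One simple way to make "being realistic" operational, rather than merely "looking realistic," is to filter generated samples against explicit domain rules. The sketch below illustrates that idea with rejection sampling; generate() and violates_constraints() are hypothetical stand-ins for a generative model and a domain rule-check, not Cortexa Labs' pipeline.

```python
def constrained_synthetic_data(generate, violates_constraints, n_samples):
    """Keep only generated samples that pass explicit domain checks."""
    kept = []
    while len(kept) < n_samples:
        sample = generate()
        # e.g. conservation laws, timing limits, causal ordering
        if not violates_constraints(sample):
            kept.append(sample)
    return kept
```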
A Startup, Not a Lab
Suriess also made a deliberate choice to pursue this work as a startup rather than remain in academia.
“Research that never leaves a paper changes nothing,” he said. “A startup lets you take something that works in theory and put it into the hands of people who actually need it.”
The company plans to expand into sectors beyond finance, including robotics, autonomous vehicles and surveillance systems, as well as other industries where AI failures could have significant real-world consequences.
However, despite the growing awareness of AI risks, selling security remains a challenge.
Many organisations underestimate their exposure, assume they can build solutions internally, or treat testing as a one-time compliance exercise rather than an ongoing necessity.
“There’s still a mindset that you test once, tick the box and move on,” Suriess said. “But every model update creates new vulnerabilities.”
To address this, Cortexa Labs has developed what it calls the “Model Arena,” a public platform where companies can submit AI models for adversarial testing and benchmarking, turning security into both a competitive and a reputational metric.
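A benchmark of that kind presumably reduces each submission to comparable numbers. A common pairing, assumed here for illustration rather than drawn from the Model Arena's actual scoring, is accuracy on clean inputs alongside accuracy under attack, sketched below using the earlier hypothetical fgsm_attack.

```python
def robustness_report(model, loader, epsilon=0.03):
    """Compare accuracy on clean inputs with accuracy under attack."""
    model.eval()
    clean_hits = adv_hits = total = 0
    for images, labels in loader:
        clean_hits += (model(images).argmax(dim=1) == labels).sum().item()
        adv = fgsm_attack(model, images, labels, epsilon)
        adv_hits += (model(adv).argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    return {"clean_accuracy": clean_hits / total,
            "robust_accuracy": adv_hits / total}
```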
A Long-Term Bet on AI Safety Standards
Suriess’s ambitions extend beyond building a successful company. He envisions Robusto becoming a global standard for AI robustness, a benchmark used by enterprises and regulators alike to certify whether models are safe to deploy.
“The measure of success isn’t how many people talk about it,” he said. “It’s how many models are actually hardened using it.”
He also hopes to contribute to a broader ecosystem of researchers and engineers working on AI safety, through open-source tools and collaborative research.
“The problem is too big for any one company,” he said. “If we do this right, we’re not just building a product. We’re helping build a field.”
For now, Suriess remains at the beginning of that journey.
His path into entrepreneurship has been shaped as much by informal networks, including hackathons, online communities and mentors, as by formal education. A former physics teacher, now his research partner, helped refine the technical foundations of his work, while a co-founder brought commercial direction to the project.
“The collaboration works because each person brings something you can’t replace,” he said.
Five years from now, Suriess hopes Cortexa Labs will have established itself at the centre of how AI systems are evaluated and secured.
But his ambitions are not limited to commercial success.
“The real question is whether we’ve made AI systems meaningfully safer,” he said. “Whether fewer systems fail in ways that matter.”
In a world increasingly shaped by artificial intelligence, that may prove to be one of the most consequential questions of all.
