CEO and Law Firm Partner Ilia Kolochenko Talks About the Intersection of Legal and Cybersecurity
October 29, 2025
Ilia Kolochenko is CEO of application security company ImmuniWeb and Partner and Cybersecurity Practice Lead at Platt Law LLP in Washington, D.C. With legal credentials across the UK, US, and Europe, he brings a rare global perspective to cybersecurity and legal tech. Kolochenko specializes in data protection, cybercrime, and AI in legal practice.
In this exclusive interview, Kolochenko shares the perspective he has gained from holding both roles at once. Our conversation focuses on AI, data protection, and the intersection of legal and cybersecurity.
You began your career as a penetration tester in Switzerland and later built a career as a CEO in cybersecurity, but then chose to also become a practicing attorney. What drove you to combine technical expertise with legal practice, and how do these dual roles strengthen each other?
Ilia Kolochenko: I’ve always been curious about law, about how things work, how they might work, and how they should work. From early on, I believed cybersecurity and legal practice would increasingly intersect, and that’s exactly what we’re seeing now in 2025.
I first noticed this when I was involved in PCI DSS (Payment Card Industry Data Security Standard) enforcement back in 2008. It wasn’t a law, but it was mandatory for anyone processing credit card payments, with financial penalties for non-compliance. Many clients told me, “Ilia, this isn’t really about security; it’s about passing the audit.” While I believed security should matter more than paperwork, they were focused on proving compliance to avoid business disruption. That was when I realized that legislation or regulation can hold immense power in shaping cybersecurity priorities.
Later, as GDPR (General Data Protection Regulation) loomed, the same mindset surfaced. Organizations said, “We need to improve cybersecurity to be GDPR-compliant.” I found myself wondering: are we protecting clients and preventing breaches, or just ticking boxes for regulators? The honest answer from most was the latter. That was a pivotal moment for me. It showed how law can drive real change in cybersecurity.
That realization inspired me to pursue a law degree, and I quickly fell in love with the field. I also saw parallels between the two disciplines. In penetration testing, you look for weak points to break in; in contract review, you look for ambiguities or oversights that could be exploited later. I think of it as “contract auditing,” although most lawyers refer to it as review or analysis.
Working at this intersection has been a rewarding experience. Each discipline sparks new insights in the other: law informs my approach to cybersecurity, and cybersecurity sharpens my legal analysis. I wish there were more hours in the day to explore every new development, but I’ve learned to focus on the areas where the overlap matters most.
AI regulation and data privacy laws are evolving rapidly across the European Union, United Kingdom, and United States, sometimes in conflicting directions. How should organizations navigate these frameworks to remain compliant without stifling innovation, and what role do lawyers play in guiding that balance?
Ilia Kolochenko: The key is weighing the potential penalties for non-compliance against the risks and costs of full compliance. Transparency is essential here. An ethical duty of every lawyer, both in general and under professional conduct rules, is to ensure clients clearly understand their responsibilities, potential liabilities, and the consequences of their choices.
Each business is different. For some large, well-established companies, a fine from a regulator like the FTC or an EU data protection authority may not be a serious concern. They have long-term contracts and stable customer bases, so privacy issues are less likely to disrupt operations. But for online platforms with millions of individual users, privacy is a strategic advantage. A single negative story, true or not, can go viral and cause reputational damage so severe that the business may never recover. Competitors may even exploit the moment with rumors on social media. That’s why compliance decisions can’t be made in a vacuum; they must take into account business realities.
This is where I see differences between legal systems. In countries like Switzerland, one becomes a lawyer after years of legal study and training, but often without significant real-world business experience. Advice tends to be black and white: do this, don’t do that. In the US, however, many lawyers enter the profession with backgrounds in fields such as business, psychology, or even cybersecurity. That diversity gives them a broader perspective and, often, a strategic edge in advising clients.
Of course, lawyers must never downplay risks. Regulators can be unpredictable, and ignoring certain laws could amount to malpractice. However, prioritizing compliance areas and tailoring advice to business needs can help clients avoid unnecessary harm. I’ve seen thriving companies nearly destroyed by lawyers who gave technically correct but commercially blind advice: “Stop everything or face massive liability.” That approach may be legally sound but practically disastrous.
That’s why I continually expand my own knowledge, whether in cybersecurity or through my second doctorate, in forensic science. A purely formalistic legal approach, even if impeccable on paper, can be deeply damaging in practice. Clients need lawyers who can connect legal obligations with business realities.
Cyberattacks targeting sensitive data are becoming more sophisticated. From your perspective as both a security professional and attorney, what common mistakes do organizations make in handling data breaches that expose them to greater legal liability?
Ilia Kolochenko: In my view, most breaches come down to human error, not highly sophisticated attacks. Truly advanced intrusions do happen, but they represent perhaps 1% of incidents, if that. The overwhelming majority of breaches exploit basic mistakes: misconfigurations, phishing, impersonation, or poor processes.
The real problem is that too many organizations treat cybersecurity like a sprint instead of a marathon. They rush to buy expensive software or AI-driven tools, often incompatible with legacy systems, instead of building a coherent, long-term strategy. They overlook third-party risks; for example, hackers may bypass your defenses entirely by attacking your accountants, law firms, or IT vendors. Cybercriminals are pragmatic: they’ll steal data from a supplier and ransom both sides.
I see companies spend millions, sometimes hundreds of millions, on security, yet their policies clash or contradict each other. They’ll invest in endpoint protection but skip employee training because a $15,000 course feels excessive. But without training, human error, from receptionists to executives, is inevitable, and AI alone won’t prevent it.
Another misconception is the supposed “cybersecurity skills shortage.” While highly specialized roles, say, cloud security experts in Switzerland, can be hard to fill, there’s no global crisis. In fact, I know many skilled professionals actively seeking opportunities. Reports claiming a multimillion-person talent gap are often exaggerated, sometimes to boost investor narratives. In reality, we have plenty of qualified cybersecurity experts in the US, UK, and across Europe. Remote work has only widened the pool of available talent.
So, the main mistake organizations make is not a lack of tools or people but failing to implement a coherent, long-term strategy that integrates technology, processes, and training.
Law firms themselves have become prime targets for hackers, given the high value of the client information they hold. What are the most overlooked cybersecurity risks in the legal ecosystem, and what practical steps should law firms take to strengthen their defenses?
Ilia Kolochenko: The biggest risk, especially for small and midsize law firms, is a lack of cybersecurity awareness. Many believe they’re too small to be targeted, that sophisticated threat actors only go after global firms. While it’s true that advanced attackers pursue the largest players, there are countless less-sophisticated cybercriminals constantly probing for easy targets. When they stumble across a vulnerable law firm, the incentives to attack are strong.
Law firms are uniquely sensitive to downtime. If a small construction company gets hacked, it may be inconvenient but rarely catastrophic: the company can often continue operating and may even count on client sympathy. If a law firm suddenly loses access to court filings, client communications, or case files, even for a short period, the consequences can be devastating. With deadlines looming, most lawyers would quickly pay a ransom just to keep their practice running. Cybercriminals know this, and that makes law firms highly attractive targets.
Another overlooked risk is the blind faith in technology, especially AI. Too many firms think that buying the latest tool will magically solve their problems. In reality, when you deploy a new technology without carefully integrating it into existing systems and processes, you often create new vulnerabilities.
I’ve seen nightmare scenarios where a firm unknowingly exposed itself through a third-party vendor. For example, imagine a legal AI startup brought in to automate routine tasks. Without proper vetting, the firm grants them access to sensitive cloud storage. The startup, however, invested nothing in cybersecurity, and when it’s compromised, the firm’s data goes with it.
The lesson is simple: law firms, especially smaller ones, can’t afford complacency. Awareness, proper integration of tools, and rigorous third-party risk management are just as critical as firewalls and endpoint protection.
With AI increasingly integrated into both legal work and cyber defense, what do you see as the most significant risks and opportunities for the legal profession in the next five years?
Ilia Kolochenko: One of the biggest risks is overreliance on AI. We already see research suggesting that heavy dependence on AI tools can erode skills and judgment. For lawyers, that’s a serious concern. If you lean on AI for tasks like summarizing judgments, drafting motions, or writing pleadings, you may save time today, but in the long run, you risk losing the ability to perform those tasks independently. That loss of expertise is far more dangerous than occasional fake citations or disciplinary issues. Imagine lawyers who, in a few years, cannot draft a motion without the aid of ChatGPT. That’s a professional dead end.
At the same time, AI offers tremendous opportunities when used responsibly. It can accelerate research, summarize lengthy documents, or enhance tools like spell check and drafting support. I often use it to quickly capture the essence of a 200-page judgment or generate ideas for where to focus my research. It’s like a turbocharged search engine. But these are valuable accelerators, not replacements.
Looking forward, lawyers who use AI to amplify their skills, rather than outsource them, will become more creative, efficient, and competitive. But the profession must avoid sliding into complacency. AI should be a tool to empower lawyers, not a crutch that diminishes their expertise. The real risk is skill erosion; the real opportunity is harnessing AI to further enhance human intelligence and creativity.