
Retired. Finally.
Over four years later than originally planned—and exactly on my anniversary of 23 years with atsec—I came one last time to our Munich office and turned in my laptop, my phone, and the office keys.
No more projects. No emails to answer. No status calls to attend. No reports to write. Just time for family and friends now.
Except for this last blog entry that Sal asked me to write about the experience of dedicating my professional career to the seemingly boring and dry subject of IT security, and to atsec in particular. And, conversely, how it turned out to be anything but boring: exciting, rewarding, and at times just wild. It is a strange privilege to write your own eulogy. But sitting down to reflect on those 23 years, I found something unexpected: this story is not really about me at all.
Nothing I Did Alone
Everything that comes to mind when I look back—every successful evaluation, every standard we helped shape, every customer relationship built on trust—was the result of a team. Not my team, not my achievements. Our work.
My insights came from discussions with atsec colleagues across Munich, Austin, Stockholm, Rome, and Beijing. The quality came from intense exchanges with customers who pushed us to think harder and to take on tasks that nobody else dared, some even deemed impossible, like our first evaluation of Linux. Our expertise grew because we were a group of people who genuinely loved what we were doing and who didn’t hesitate to tell each other when something wasn’t good enough yet.
That culture—a culture of honest, respectful, technically rigorous collaboration—is the thing I will miss most.
What IT Security Actually Is
When people outside the industry hear “IT security,” they often imagine either impenetrable fortress walls or catastrophic breaches. The reality is more interesting and, I think, more human than that.
What we do at atsec is examine trustworthiness. We look at complex products and systems—operating systems, cryptographic modules, network equipment—and we ask: does this actually do what it claims to do? Is the security documented, implemented, and testable? Can it be verified independently?
Those questions matter. They matter to governments, to companies, to the ordinary person whose data flows through the systems we evaluate. The fact that atsec has been part of shaping the standards that frame those questions, from Common Criteria to FIPS 140-3 to the new EU Common Criteria scheme, is something I carry with genuine pride. Not personal pride. Team pride.
The Thing We Actually Produce: Trust
I just said that what we do is examine trust. Let me stay with that thought for a moment, because I think it matters more now than it ever did during my career.
Our work is not only about technical expertise. It is about the trust that our customers—and their customers—place in the results of our evaluations and assessments. When a company ships a product carrying a certification we helped earn, they are not just selling a technical specification; they are passing along, and adding to, a piece of trust. That trust runs from the people designing, implementing, testing, and maintaining the product, through the evaluators in our labs and the certifiers guarding the scheme and awarding the certificates, to the end user who may never know any of us exist. It is a chain, and every link must hold for the overall trust in the product under scrutiny to remain valid.
This is not abstract. We are living through a period in which global actors—state-level and otherwise—are deliberately working to erode trust in institutions. In standards bodies. In certification schemes. In the very idea that independent verification means something. The goal is to create a world where no one can rely on anything they haven’t personally verified, and where complex decisions become impossible to make with any confidence.
That erosion is dangerous, because trust is not a luxury; trust is the mechanism that allows human beings to function in an environment they cannot fully comprehend. None of us can evaluate every product we use, every system we depend on, every institution whose decisions affect our lives. We delegate that judgment to structures we trust—and we trust those structures because they have earned it, through independence, through integrity, through consistent and verifiable behavior over time.
atsec has spent 26 years earning that trust. Not by being the loudest voice in the room, but by being the one whose work holds up when someone looks closely. Our independence from vendors is not a marketing claim—it is the structural guarantee that our judgment belongs to no one but the standards and the evidence. That is what makes us a meaningful part of the chain. I leave knowing that chain still needs people who take it seriously. I am glad the ones who remain with atsec do.
A Company Like No Other
I won’t pretend that every day was easy or that every project went smoothly. But I will say this: I never once doubted that I was working with people of integrity.
atsec’s philosophy is to act with integrity, focus solely on security assessment and evaluation, and remain completely independent—not affiliated with any hardware or software vendor, never selling anything other than expertise. That’s not just a mission statement: in 23 years, I watched my colleagues live it, day after day.
That independence is rare. It creates a kind of freedom in the work—you give the honest answer, not the convenient one. And it creates a kind of trust with customers that is hard to build and easy to lose. We never lost it.
What I find harder to describe, but equally important, is the culture that grew from that commitment. Nobody planned it. There was no offsite workshop where we decided what kind of company we wanted to be. It simply emerged, because everyone was trying to do the right thing, and doing the right thing turned out to have consequences for how people acted and treated each other.
The most visible one: when a colleague reviews your work at atsec, they hold nothing back. No diplomatic softening, no deference to seniority. If the document isn’t good enough, it isn’t good enough—regardless of whether it was written by a newcomer or by the most senior person in the room. From the outside, that can feel a little rough. The feedback is direct. Sometimes blunt. I have been on the receiving end of it myself, ego duly bruised. But I learned to be truly grateful for it. Because the intensity of the review is itself a form of respect. It says:
“I take your work seriously enough to engage with it fully. I am sharing what I know with you, not protecting you from it.”
The colleague who tears your document apart is also the colleague who will defend it in front of a customer once it is right. That combination—honest internal criticism, shared external commitment—is not something you can install as a policy. It is also almost impossible to formally audit. It has to grow. At atsec, it grew.
What Comes Next — For You, and for Me
I’ll be honest: I envy my colleagues a little.
Not because I want to stay—I don’t, the time is right, and the grandchildren are waiting. But because the field is becoming genuinely exciting again in ways that only come along once or twice in a career. And I won’t be there for it.
AI tools are about to change what is possible in our kind of work in a fundamental way. Today, when we examine evidence—checking consistency, completeness, traceability across hundreds of documents and test results—we work with samples. We pick representative portions and reason from them. We do it carefully and professionally, but we do it knowing there is more we didn’t look at. That constraint is not laziness; it is the simple reality of human bandwidth.
That constraint is lifting. The tools now emerging will allow evaluators to comb through the entirety of the evidence, not only a representative slice of it. To flag inconsistencies across thousands of pages that no human team could have held in working memory simultaneously. To ask questions of a document corpus the way we currently ask questions of a single document. The quality ceiling for evaluation work is about to rise significantly—and the people at atsec who get to explore those tools in the context of real, complex evaluations are in for something remarkable.
But—and this is the part I want to stress because it is so important—the tools are tools. Nothing more.
The trust chain I described earlier does not run through software. It runs through people who have earned credibility over time, through institutions whose independence has been tested and held, through experts whose judgment can be questioned, challenged, and defended in a conversation. AI can help an expert be more thorough. It cannot be the expert. Not because the technology isn’t impressive—it is—but because trust, as a social and institutional mechanism, requires human accountability. Someone must be answerable. Someone must have skin in the game.
For the foreseeable future, that someone is us. The human experts whose names appear on the evaluation reports, whose professional reputations are bound to the conclusions they sign off on. Remove them from the chain and you don’t have a faster process—you have a broken one.
That is why I leave without worry. The work will change. The tools will improve. But the need for people who have earned the right to be trusted—that need is not going away. If anything, in a world where AI-generated content is everywhere and institutional trust is under pressure, it is growing.
There is one more consequence of this shift that I think deserves to be named—because it points toward something bigger than better tooling.
If AI tools allow evaluation labs to comb through evidence more thoroughly, they also allow manufacturers to do exactly the same thing. A vendor who uses these tools systematically throughout development can continuously verify the consistency and completeness of their own security documentation—before the evaluator ever sees it. The boundary between development and evaluation begins to blur.
This means the role of the evaluation lab will shift. We will spend less time re-running checks that the manufacturer has already run, and more time asking a different set of questions: How sensibly were the tools applied? How complete and consistent is their usage across the development process? How robust are the internal processes that govern that usage? And critically: how well is all of this documented, so that the decisions made on the basis of AI output can themselves be examined and trusted?
In other words: the evaluator becomes, in part, an auditor of a process rather than solely a tester of an artifact. The expertise required does not diminish—it changes shape. And the independence and integrity that underpin the trust chain remain just as essential as before, perhaps more so, because the processes being audited will be less visible and harder to challenge than a test result on a page.
In one of my last projects, I had the pleasure of working with colleagues from BSI on their scheme for Germany’s national approval of IT products handling classified information. They anticipated this shift some years ago and came up with a framework that implements those new requirements quite successfully. I think it deserves wider international recognition and adoption. Just sayin’ …
As for me: I will enjoy travelling, hiking, and biking with my beloved wife. I will spend time with my grandchildren on silly things that will drive their parents mad, and meet old friends. I will cook, I will read the books that piled up over the years while enjoying a glass of good wine. And I will occasionally, I hope, visit my dear atsec colleagues in their office when I’m strolling through Munich.
Everything will be good. Really good. For all of us. Thank you so much for the incredible time I had with all of you!
-Gerald
