Snyk’s Danny Allan on Making Security Developer-Friendly
Turning security into an enabler of developer productivity.
Security often feels like a roadblock to developers, but what if it could be seamlessly integrated into the development process? As software delivery becomes increasingly automated and self-service, the traditional approach to security needs a major overhaul.
Danny Allan, CTO at Snyk, shares practical insights on transforming security from a bottleneck into an enabler of developer productivity. Drawing from his extensive experience at IBM, VMware, and Veeam, Allan discusses how security teams can shift left effectively without creating friction.
Hey everybody, welcome back to the Platform Engineering Podcast. I'm your host, Cory O'Daniel. And today I am thrilled to be joined by Danny Allan, Chief Technology Officer at Snyk.
Danny's been thinking about security and infrastructure long before Platform Engineering became a buzzword. With a career that spans IBM, VMware, and most recently Veeam – where he spent seven years helping triple revenue, lead multiple product launches, and push the boundaries of data protection. Now at Snyk, he's helping redefine what developer-first security means in a world where software delivery is increasingly automated and self-service.
We're going to dig into how security is evolving inside platform teams, where security engineers fit into this new world, and what it takes to really shift security left without slowing things down. Danny, thank you so much for coming on the show, I really appreciate it.
Thanks Cory, super excited to be part of the conversation.
Yeah. So just tell me a little bit about your role at Snyk and what brought you there.
Well, I've been in application security for a long while, as you mentioned, through IBM and VMware and those different companies. What I've noticed is that the root causes for a lot of the security issues have remained the same over the last 25 years, but what has changed is the infrastructure. As you noted, DevOps wasn't around 25 years ago, right? And so it's interesting to me. I'm an idealist. I'm passionate about improving security. And so I thought, you know, as we move into this new era of… not only just DevOps, but leveraging AI in DevOps… how can we make sure that security is built-in instead of bolted on? Which is what we've done in the past.
It has and you can see it in like the way that we name things, right? It was DevOps for the longest time and then like all of a sudden… I don't know, mid-2010s… it's just like somebody just threw Sec on there. Like, “Hey, don't forget about that security thing.” Which is like paramount to like everything that we do, right? We need to protect our users' information, but it does really just seem like it was kind of tacked on at the end.
The root causes of a lot of the security issues have remained the same over the last 25 years, but what has changed is the infrastructure.
How is Snyk thinking about the intersection of platform engineering and security? Are they treated as separate tracks, or are they increasingly becoming part of the same initiative?
Well, my belief is they need to be part of the same initiative. If you have them as separate tracks, inevitably what happens is security becomes the bolt-on after the fact. Which is what has always happened in the past – you built a new infrastructure, you tacked on security to it.
I'd like to see a future world where engineering has these principles built into it from the very beginning. So your DevOps practices essentially set you up for the future in a way that security, and governance, and everything else is part of the underlying frame.
Yeah, so how is Snyk helping security engineers outside of Snyk? So not actually on your platform, but on other platform teams. How are your tools helping security engineers get involved earlier and helping us get some of that security upfront instead of something that we're thinking about after our first breach?
Well, one of the things that I talk a lot about is singing from the same hymn book or playbook, if you will. Historically, there's been three groups that have been on three different sheets of music that were very discordant from one another. You had the developers that were writing the code, you had platform engineering that were trying to put together an infrastructure for developers to operate on top of, and then you had the security team that was yelling off in the corner saying, “Hey, but this is all coming out crappy on the back end of it.”
Part of the mission is to take those three constituents and give them a common frame of reference for what the state of the world is today. Even before you get into the transformation of the company itself, which I think is the real issue that needs to be addressed.
Yes, and that transformation of culture, I feel like that is the thing that has been always hardest about DevOps, right? I feel like a lot of times… whether it's talks or books… people are always saying, “Well, you've got to get the culture right.” And what's really hard about culture is like, you can't just talk culture into existence. You can't just say, “We're DevOps-ing now.”
Actually getting that transformation to happen really starts with processes and tools and people. And starting to solidify a culture, rather than say like, “This is the culture we're going to have. How do we get there?” You have to build that culture.
I think one of the things that's hard for many teams is like, before you get that first operations engineer, you start as a startup. You’ve got a couple of engineers. You get to this point of scale where you're like, “Oh, I’ve got to bring somebody in that knows Ops.” You bring that person in that knows Ops. Now you're going for a few more years. And now you get to that point where you need that first security engineer. And like that person is coming in so late into the picture, right? There's probably been years of development.
What are some things that you guys have seen or tools that you're working on that can help that first engineer that joins a team where there's just years of debt that's racked up? How can we get them involved and get them into this framework where they're offering value early and not just kind of drowning in security debt from years of development?
Well, one of the things that I've seen to be very successful has nothing to do with technology at all. That is put the people in the same room with pizza and beer and start a relationship, get people trusting one another. Often one of the things that fails is the security person comes in and says, “Thou shalt do this,” or “Thou shalt do that.” There's no trust established. There's no relationship established. And nothing comes of it. In fact, the opposite occurs, there's friction – “Who's this new guy? And why is he telling us what to do?”
Yeah.
One of the things for the security person, if I was coming into that situation, is just start to establish a relationship. Have an appreciation for what people have built over the last 10 years, because they're not stupid. The engineers that have been building this stuff are really, really smart, right?
And so it's not a technical cultural transformation to start, in my opinion. It's a relationship influencing people. Establishing trust between teams is probably the best place to start, and that has nothing to do with technology.
Yeah. One of the things that I found in my previous roles, when I was one of the first Ops people coming on is… it sounds silly… but like that surveying of engineers and actually trying to find the friction points that slow them down in delivery. That's where I try to focus my first few efforts, not coming in and saying, “Hey, the CTO told me to do this.” Like, “Engineers, what is making it hard to deliver?”
How can a security engineer kind of take that same approach? Is it literally just kind of like talking to people? Like, where maybe their biggest concerns are in the application. Or like day one are they jumping in and bringing in some scanning tools to kind of figure out where the problems are?
Yeah, I think it's going in and asking the questions and listening to the people on the other side. What you'll find is that the engineering and development people generally know what needs to be done. They just haven't been given the resources and time to actually do it. Same thing on the operations side. They generally know what needs to be done. And so coming in and asking a lot of questions and then working with them to prioritize which ones are the most important. And make sure that everyone's aligned on that, because sometimes you'll prioritize the wrong things or you might prioritize things without a complete understanding.
Make sure everyone's aligned and then work to start to unblock those things. And be maniacally focused on the short early wins as you're going towards the long-term cultural shift. Because, as you say, it's not coming in and telling them what they have to do. It's going to take time, it's going to be an iterative process.
Yeah. I feel like when it's not rolled out well is when you get to this point where it's like there's some work that's been done… this is the bolt-on scenario… There's been some work that's been done, I'm shipping some features, I’ve got a PR, I open it… like I'm on time for once, I'm going to hit my deadline... and then I grind to a halt in a PR where there's like some security issues or compliance issues. And now these things, that are absolutely important for protecting the business, are slowing me down.
So like, how do you balance these strong security assurances without slowing down teams with layers of process and approval? How can we actually start to get this earlier in the process, rather than something that we catch maybe in CI while we're waiting for something to be merged?
Sometimes it will actually slow down before it speeds up. And I think that's okay. There's a story that sometimes people tell, or I've heard told multiple times, about this guy sees some canoes going over a waterfall and he's trying to stop all the canoes and it's very painful. And he realizes, “Wait a second, I need to let a few canoes go over the waterfall, go upstream and stop them before they get there.” And I think it's similar in the security space.
Sometimes the most effective thing that you can do is to take the time to build it into the process. I always say, for example, “What eliminated buffer overflows?” It wasn't throwing hundreds of resources at it to solve, you know, tracking all the memory as it was going through the process. It was actually going to a memory-safe language. Like that does far more to eliminate buffer overflows.
The same thing is true on DevOps. Like you could educate everyone on the proper way to secure this particular environment, but better to create a template or framework that people can reuse. I guess my point is this, take the time to figure out how to make it easy and built-in so that you're taking away cognitive load from the developers, from operations, on how to do something securely.
Take the time to figure out how to make it easy and built-in so that you're taking away cognitive load on how to do something securely.
Getting that built into the actual process. We're not looking at like, “Hey, here's the security checklist, make sure you've done this.” It's like, this is a part of how we do it. And I like that slowing down to go fast.
I'm a runner and there are so many times that like I'll go out to go running and I'll get a few blocks in and I'm like, “I half-assed tying my laces,” and like my shoes are a bit loose. It's like, if you want to run, you’ve really got to lace up your shoes, right? And it sounds like it's kind of the same thing.
You’ve got to sometimes slow down, and it's almost like that idea of error budgets in SRE. We’ve got to slow this down and figure out like what is wrong. And now we can release with confidence moving forward that things are in place and we have these security assurances and guarantees.
Yeah, 100%. And again, it's important not just to do that on your own. Do that in alignment with the various teams and pick your small wins. Like don't try and boil the ocean. You do one or two things and then you go from there.
Yeah. So I'd love to know, with your teams internally, like how do you treat your own security engineers? Are they embedded with your platform teams? Are they a part of that? Do they act kind of as a consulting layer to your platform teams? Like how does Snyk involve their security engineers in the platform?
Yeah, 100% we embed them within the team. So we have a security champions program. We have an independent product security team, that is true. And actually, I say we're drinking our own champagne – we're using our own products across our own environment and in every team everywhere. However, what we have established is a Security Champions program with a Security Champion on every single team. They come together for the cutting-edge things, but we use them within the team to propagate information. Because not every engineer is going to be thinking about security every day, but you want someone on every team thinking about security in every meeting where they come together. So we found the Security Champions to be a very successful thing. And then we leverage them as a proof point when we're rolling out new initiatives. One of the initiatives for this year was threat modeling every single microservice. We would use them in that effort to threat model every single microservice that is coming through the process.
Not every engineer is going to be thinking about security every day, but you want someone on every team thinking about security in every meeting where they come together.
We also gamify it. It is something that we found to be very, very successful.
I like this.
We put teams in competition with one another. Like, for example, when we were rolling out secure IDE usage, we were like, “Okay we'll get swag for the first team that has 100% coverage of the security plugin.” We gave away cash bonuses to the person who came up with the best idea to improve the security in a particular thing that we were doing. Like, gamify it. Make it fun for the developers. Make them want to do it.
You hear that, everybody? Once you become super secure, you're no longer drinking Kool-Aid, you get champagne, and there's cash bonuses. We gotta get secure, we gotta get secure pronto. There is yield on the other side of this.
Nice. So you said it was a Secure Champions program, is that what you called it?
Yeah, Security Champions program. It's a big thing I'm passionate about.
Is that something that you guys put in the public? Is this documentation that you share outside of Snyk? Or is this just like an internal program that you've kind of modeled?
It's internal, but we didn't come up with it ourselves. We stole it from the public domain. Lots of companies do this. But one of the things that Snyk actually does with our customer base (we have 4,400 customers) is we actually will run a framework assessment of how ready they are to… well, it's essentially how far along in the cultural journey are you? And there are about 36 different steps that strong security culture companies will do. They will do a self-assessment. And what we do is we look at it and we can say, “You know, you'll get the best bang for the buck if you do X or if you do Y.” And of course, one of those things is run a Security Champions program.
How big are teams when they start to kind of invest in programs like this? Like, would you do it when you're hiring your first engineer? Would you start to look at a program like this before you've hired your first security engineer? Like, when's the right time?
The right time is as early as possible, is what I would say. It is true that most organizations, up until they have about 50 engineers, are not running a Security Champions program. However, it is always exceedingly valuable to have someone identified who's curious about security. And I would say, even from a startup, from an inception standpoint, you want someone on the team that is curious about, “How do I make operations better? How do I make security better?”
Everyone thinks security is this separate, walled garden thing. It shouldn't be. The whole point is, “How can I be curious about building better software or improving operations or improving whatever it happens to be?” And so you want someone who's curious about delivering a better outcome, a better product. And that's the type of person that you should focus on making the Champion. In fact, I would argue, don't make your most senior developer, who has the most knowledge, the Champion. You want someone who's curious about improving things to be the one that is the Security Champion on the team. And that can start at the very beginning. You can have a team of three or four people, as long as one of the people is curious about making it better - that's your Champion.
It's really interesting that you said not your most seasoned person. Especially with where we are in the world today, with the panic over “Will AI take our jobs? Are we all not going to be writing software 12 months from now?” If you look at the Stack Overflow surveys over the past few years, the number of people that are security engineers or have security experience… and this is true of all operational roles… has declined relative to the number of software developers, because we’re just making them so fast.
Yes.
Security engineers happen to be one of these smaller groups of engineers that we have. I mean, scale is important, uptime is important, but there's nothing more important than protecting our customers' data. And I feel like that is a tip right there for just job security - we need more people with this deeper experience.
I do feel like it does tend to shift to your most seasoned person, right? It's like, “Oh, they've been here the longest, they know the most, let them take on this role.” But it is interesting, if we're exploring a new field… you know, security… like, why not have somebody that has the curiosity and the bandwidth for learning take that on? And maybe that is a niche that you can kind of create in your business. And it's certainly a niche industry-wide.
Yeah. Reward that curiosity and motivation, and cultivate it, is always my recommendation. Sometimes what we do is we continue to load things on our most senior developers who have the most experience. And what you're really doing is you're overloading the people that shouldn't be focused on that, quite honestly. They should be looking at other things. And so you get 10% of your people that are overwhelmed, and you're underutilizing the other 80%. If you cultivate curiosity, by definition, what you're doing is cultivating better DevOps, better DevSecOps, better all of these things.
If you cultivate curiosity, by definition, you're cultivating better DevSecOps.
Yeah. Sometimes it can feel like, “I'm the newer person on the team, I'm never getting the interesting thing to work on. It's always going to, you know, the folks up here that have a bit more experience.” If you really want to shine on a team, there's making that button shinier, bluer, greener, clickier or whatever, but then there's the value of like, “You know, I've started to use some security tools. I actually found some issues in our code base.” Like that is hero status right there, especially at an organization that hasn't really focused on security to date.
Yeah. And one of the things that is useful for the person in that role is to focus on kind of the cause and the outcome, but not the issue. I don't know what security issues you’ve found… but say buffer overflow or SQL injection. It doesn't matter. What is more relevant is, why did that thing happen? To understand that. Because that's what you're trying to prevent in the future. Like, we didn't do input validation from a source. And then also explain the outcome of this. The result is we're going to lose customer data. Because it's the outcome that matters and it's the cause that matters. In some ways, the security issue itself doesn't really matter – it's just a means between those two things.
Yeah, SQL injections happen. Don't let them.
Yes.
SQL injections happen – I'm going to get a new bumper sticker. Do you guys have great swag like that? Do you have really witty security swag?
We do have some pretty cool security swag. In fact, I used to be at a company that was 100% security and we had pen testers and things. And so everyone had the leet t-shirts that had all the, you know, the geek text on there.
So let me ask you a question. I would love to know… we used to do this, we still do it every once in a while on the show… we have this thing that we call Trash Ops. It's a list of like a hundred questions of just some of the most cliché things I've seen people do in Operations and DevOps and whatnot… I would love to know from you, like you've got a career where you've been at some major companies, could you share a moment… and if this is too NDA, please let me know… Where was a place where you just saw, “I really wish (I'm sure that you have this all the time, especially being a security-minded person) that we could have had the security team involved earlier here.”
Well, every time you run into an issue, I think, “I wish the security team was involved earlier here.” It's always a balance though, Cory, because… You made the comment of, “the most important thing is the customer's data,” and I don't know if that's true. Maybe it is true.
Okay.
But the company doesn't exist unless you're actually selling a product.
Yeah, valid.
And so there is a danger in pivoting over to it needs to be secure. Because if that is really the reality… I always tell people the only secure computer is the one that's encased in concrete and buried 10 feet in the ground, right?
Yeah, not usable.
There is no security. And so you have to have a balance in risk and take a risk-based approach to these things. You're not going to secure everything. And that's okay, not to secure everything. It's about understanding the risk and that's why it's so important to have the Security team aligned with the Ops team aligned with the Development team. Because sometimes you're going to agree to, “You know what, we know it's not secure but the business matters too and we're going to sign off on that risk.”
Yeah, I really like that.
My day job is infrastructure self-service and when I talk to Operations engineers, one of the things that they're… like when you think about going back to like the origin of DevOps. Dev's job, like you're saying, is to sell a product. Their job is to introduce change, right? And the Operations engineer's job is to minimize outages and resist that change. And that was really a big part of the original problem that led to the idea of DevOps… And so as we get to this world where there's a lot more self-service around infrastructure, the teams that I've met, where they tend to be hesitant is, “Yes, my developers do need to move faster. We do want to enable them to move faster. But as we give them the keys to the kingdom, now we don't know what they've done.”
I think that's a thing that’s tough for many Ops people. It's like, “I do want you to go faster, but if I say you can deploy whatever you want. You can deploy whatever you want.” Which is scary to some folks, right?
Yes.
When thinking about this from a security engineer perspective, what are the most common security or compliance pitfalls that you see in common self-service infrastructure models?
That you don't make it easy. If you look at platform engineering as a parallel to security, I have to imagine that they're actually very similar. In other words, the way that you can help the organization go fast, if you're a platform engineer, is you are acting in service of the development team. You're giving them a platform that can enable them to do operations in a secure way without them thinking about doing that in a secure way. You're giving them predefined templates that make it easy for them, but also make the operation simple.
I see security as the exact same thing. You don't come in with a checklist of you have to check one, two, three, four, five. You come in and you give them a framework that they cannot bypass, that builds security into the practice. To make this very tactical, an example: it's easy for me to say, “We're going to come in and do a security scan right before you deploy. It's going to be part of the CI pipeline.” Isn't it better to define some pipelines and, within that pipeline, you've already embedded the checks, you've already automated the capture of that issue over to the ticketing system? And maybe actually it goes further. It generates the actual fix for it and it automatically accepts it or it gives it to them to accept it. But if you make it easy, then they will accept it. And I see that really as security's job – it’s not to make things more complex, but to make it easier for the developer. Similar to platform engineering in many ways.
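To make that concrete, here's a minimal sketch of what an embedded check might look like in a GitHub Actions pipeline, assuming Snyk's published `snyk/actions` step and a `SNYK_TOKEN` repository secret (the ticketing and auto-fix steps described above would hang off this same job):

```yaml
# A hypothetical paved-road pipeline: the security scan is built in,
# not bolted on, so developers never have to remember to run it.
name: build
on: [pull_request]
jobs:
  build-and-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan open source dependencies
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          args: --severity-threshold=high
```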
Yeah. When you're trying to make it easier for them, this kind of like framework, as you said, how do you actually get that into their hands? Like, code-wise? Is this like it's a part of my local build? Maybe it's part of my test suite or part of my linter where it's running some of these scanning tools locally before I even get to commit, right? Like I save right now in VS Code and it formats my code. Are we also doing like security scanning at this point in time so that we're not waiting 15 minutes into a build on a PR?
Well, I guess I see it at even more of an infrastructure level. At an infrastructure level, first of all, it could be as simple as giving them predefined, pre-hardened containers that have already gone through all of the hardening that needs to be done.
If you get down to a code level, it's having a library available to them of the acceptable ways to do logging or input validation. That they don't have to go and create that. In fact, you could argue that they shouldn't create those things. It's already been well-defined. We know this when it comes to cryptography, by the way – it's been beaten into the developer's head that they shouldn't go write their own crypto library. So we give them crypto libraries, right?
Yeah.
The same thing is true of all the functions that lead to security issues. Things that do output encoding, and input validation, and database connections, or whatever it happens to be. That is where you provide them a library of predefined, pre-hardened, and pre-certified. Then you can focus on the things that are the exceptions to what you've provided for them.
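As a sketch of that idea (the module and function names here are illustrative, not a real library): a small internal package that gives developers one blessed way to do the risky operations, so the secure path is also the easy path.

```python
# securelib.py -- a hypothetical internal library of pre-certified helpers.
# Developers import these instead of hand-rolling database access or encoding.
import html
import sqlite3


def fetch_user(conn: sqlite3.Connection, user_id: int):
    """The blessed way to query: always parameterized, never interpolated."""
    cur = conn.execute("SELECT id, name FROM users WHERE id = ?", (user_id,))
    return cur.fetchone()


def render_comment(text: str) -> str:
    """The blessed way to emit user content into HTML: encoding built in."""
    return html.escape(text)
```

The point isn't the helpers themselves; it's that anyone writing raw SQL or ad hoc encoding becomes the exception you review.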
Yeah. I love the idea around container security. In this world where there's a lot of concern about like your supply chain, supply chain attacks, and like the provenance… given that there are things like distroless images and some of these other pre-hardened container options… in a world where we have those now available to us. It's not just us grabbing the nginx container off of Docker Hub anymore… How important do you think it is for earlier-stage teams and later-stage teams to still own, not the builds of their containers, but actually own container definitions themselves? Is that still important? Or is leaning into tools like distroless and Chainguard good enough? Or should we still be trying to own our own containers, taking, you know, forks of those that we build ourselves?
Well, I absolutely believe that you should have pre-hardened containers. In fact, if you go down regulated environments… like we're in the process of being FedRAMP certified, we have the authority to operate at the moment and we're going for the full certification… if you do that type of thing, you need to have pre-certified, pre-hardened containers. And my belief is that that is a good practice for everyone. And what you should be doing as a security team or as a DevOps team would be documenting the processes when that doesn't work, because inevitably you're going to come to exceptions. And so what you do is you say, “Here's the default way to do it.” And if you are going around that or it doesn't work (because it's not going to work in all scenarios), then you make it very simple and seamless and clear as to what the action should be when they can't accept the Chainguard image or the pre-certified image.
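As an illustration of that paved road (the registry and image name are placeholders, not a recommendation): application Dockerfiles start from the pre-hardened base the platform team publishes, and the documented exception process only kicks in when that base genuinely doesn't fit.

```dockerfile
# Hypothetical paved-road Dockerfile: build on the pre-hardened,
# pre-certified base image the platform team publishes, in the
# spirit of distroless or Chainguard images.
FROM registry.example.com/hardened/python:3.12

COPY app/ /app/
# Hardened bases typically run as a non-root user and ship no shell
# or package manager, which shrinks the attack surface by default.
ENTRYPOINT ["python", "/app/main.py"]
```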
Okay, very cool. Like that is a good starting point. As far as like taking forks of that, should we own it? Should we be doing our builds of those images ourselves? Or do you think, in general, it's safe to pull from these public registries where these have been published already?
I'm not going to come in and mandate and say you have to do it one way or the other. Some companies, that's not their expertise, they would outsource it to a third party. Certainly when you get to any significant size, I would say, like at the size of Snyk, we need to have that internal expertise ourselves. But, you know, we're a larger company. We have a significant security practice internally.
I think the important thing is that you do it. Whether you do it yourself or whether you outsource it to a third party, the important thing is that there is that practice in place.
So I have another question, kind of going back to like bringing in security for the first time. One of the things that's always hard is when you have these teammates that contribute technically to the product, but you can't see their work. It's sometimes hard to see an operations engineer's work. You're not getting a lot of high-fives around the office from product managers because your load balancer is fast. But you make something shiny and new on the site that customers can click on and people are like, “Oh yeah, good job on that.”
As a security engineer joining a team, for your non-technical partners, what are some ways that you can illustrate your value and what you're doing so that your product managers, product owners, and other stakeholders in the product can understand the importance of security? We all know it's “important”, compliance says so, but how can we actually show our value to these non-technical stakeholders?
The good news is, on the security side, I think most people recognize the value of security because we hear of breaches every day in the news.
Just keep those breaches up. That's how I let other people…
Keep the breaches up.
Yeah, keep the breaches up.
Or go read the Verizon Data Breach Investigations Report. It's been coming out for 20 years and it only gets worse every single year.
Oh, gosh.
No, the good news is security… I think there's a fairly good understanding. One of the things that you can do, I'll go back though to an earlier comment, which was gamify it. Run capture the flag type things. It's fun, people like to do that type of thing.
When I started in the security space, if you go way back, there was a website, actually I think it might be still around, it's called hackthissite.org. They had these lessons and you started at lesson one and it was fun. Like you broke one level, and you went to the next level, and you went to the next level, and it was like playing a computer game. It was Fortnite.
My belief is that most technology people enjoy that type of thing. So you're doing two things: (1) you are educating them while making it fun, (2) while teaching them about the serious outcomes of missing this as part of your process.
Yeah, there's like two things that are exciting about that. One is like just the puzzle of it, right? I mean, I feel like that is just something that all software engineers are kind of driven by – like solving this curiosity. But then, when you figure out like what it is, sometimes the quirk itself that led to it is even the most interesting part. It’s like, I figured out how to crack it. But then, wow, like what I found in the middle of this thing that let me take advantage of this system is sometimes in and of itself extremely interesting and exciting. I love that.
Okay, one last question and then I would love to just kind of like dive into Snyk a bit more, but one last question on platform engineering.
It's interesting, like you don't know what you don't know, right? There are a lot of things in engineering that are easy to measure. Like the site is going faster – we brought in a DevOps person and they did some things, they changed some architecture, and the site’s much faster. Brought in some SRE, we've lowered the amount of times we're serving 500 errors. Brought in some new engineers, they optimized some stuff around the UX for selling stuff – making more money.
How can you measure the success of your security engineers in your platform engineering initiatives? What are great ways to measure what they're doing and be able to quantitatively show the impact and value they're adding to a team?
I don't know if I have an immediate good answer to that because it's going to be different in everything that they are implementing. What I would say… because you're right, I mean, it may end up that the site is faster and so your measurement in that case is “What was the time to service before and what was the time to service afterwards?” So there's going to be different types of measurements.
However, I do believe that your platform engineering team is probably the most critical constituent of those three constituents: developers, security and platform engineering. Because the team that's actually going to implement those controls, and the optimizations, and the improvements, even security… security teams generally don't know how to implement the control itself within the environment, and developers don't want to, and there's too many of them... it's the platform engineering team that actually comes along and implements it.
But how do you measure it? For security, I know that most companies that I talk to look at mean time to resolution as probably the most common metric. It's just that I don't think that that is a consistent metric for all of the things that a platform engineering or a DevOps team is doing. Sometimes it's not mean time to resolution, sometimes it's the performance of the service, sometimes it's the issues that are not introduced.
One of the things that we've been measuring recently is preventable issues. How many issues are you preventing back before it even gets into the pipeline? So the measurements are going to be different. However, if you ask me who is the most critical individual in the chain, I actually think it's probably the platform engineering team.
Okay. That's always one that's been a bit hard for me, like as a team leader, how to quantify like the work that is being done there. Because you could find issues all day long but, like you said earlier, you have to balance that security versus… like sometimes that vulnerability is kind of how the product works.
It's like, “Eh, we built it around this thing.” I've seen that before. We're like, “This is not security.” and it's like, “Well, that's how the entire thing works.”And it's like, “Okay, well, shit, I guess it is what it is.”
Other times like you find an issue and it's like “I’ve found a security vulnerability” but between how backlogged engineers are or just like the time it might take to resolve this and roll it out… it could be a minute fix but it could just be months to get something out, just depending on like pipeline. Or if you have two services, right? And you're like, “Okay I’ve got to get a vendor to change, the TLS certificate they're using.” Or whatever, right? There might be things outside of your control, especially as you're interacting between systems, and it starts to get kind of hard.
So I really like that.
I would love to hear a bit about what Snyk is doing today. Like, what are some of the ways that we can start using Snyk in our own products and in our platform engineering practices? And if you are comfortable talking about it, I would love to know how you're viewing security in this world of AI, where we're just kind of pushing our data out to third parties.
Well, let me address the last one first, AI, simply because it's the one that comes up most frequently. I think about AI in three different ways.
First of all, developers are using AI to generate code, unbelievable amounts of code. In fact, you hear about vibe coding. And the Anthropic CEO said a few weeks ago that within six months, 90% of code would be created by these coding assistants in this way. I don't know whether the 90% is true or not, but what is definitely true is it's accelerating the velocity of the code. And you made the point earlier that the ratios are not necessarily keeping pace. If the code is going up, but you're doing it with the same number of security people, that's an issue. So there's a huge opportunity in this first category of AI for platform engineering to introduce security guardrails. So every time it goes through the pipeline… whether it be, you know, when it's being checked in as part of a PR check, or whether it's through the CI/CD pipeline, or in the deployment process… there's a huge opportunity to put in security gates that have a policy that says this is what you should do. And that will enable you to adopt AI coding assistants without the worry that there's a whole downstream effect of lots of vulnerabilities being introduced.
Yeah.
Second thing on AI is, obviously we're using it within our product itself in very extensive ways. We've been using it for five years already. We acquired a company called DeepCode back in 2020, which did the assessment, the detection of vulnerabilities using symbolic regression analysis.
Oh, yeah, yeah, yeah.
So, you know, huge gains that we're providing there. We're generating fixes. We're doing generative AI to generate the fix. We're using AI to determine which functions are vulnerable in open source components… because that's another one where just because you're using an open source component that has a CVE doesn't mean that you're actually vulnerable. You may not be calling the function within that open source component that actually results in a vulnerability… and so we're using AI ourselves to rate the risk of all of those open source components.
I love that. That is really cool. Because that is one of those things that does suck… you install a scanner and all of a sudden you get 35 PRs and you're like, “Oh man, that's a lot of stuff.” But it's like I don't use this function at all. But it's hard to even know. Like, “Okay, I got to dig into this thing and figure out like why.” I can see it's changing my code, I can see it's changed the version of something. But why? And do I need to accept that change? Do I need to introduce a change? And it might not be something that impacts you.
That is super cool. I like that.
And that's all about prioritization. What we should be doing is helping people understand the risk and the prioritization of the things that we're finding. Super important.
And then the third area that I think is important is we're starting to build applications with AI now. And that, to me, is both super cool and also very scary. The super cool part is AI applications are amazing. I can say go create an image of this, or create me code that does this, or do this, and it makes me way more productive than I have been in the past. However, the scary part of it is that there are not very good controls right now around AI. And it could be as simple as the LLM itself. Like you can jailbreak out of an LLM, it might be vulnerable. It may leak sensitive data to the person on the other side of that, that's a problem that needs to be evaluated. And then the agents… we're doing agentic AI where you're writing code to use the AI to do something. There's still traditional software in there that needs to be secured.
So it is important, as we build this next generation of modern software, that the code that is driving it… that the LLMs that are driving it… that the implementation and architecture that are driving it are secure. One of the most common protocols right now for agents to talk to one another in an AI world is MCP (Model Context Protocol) and there's…I won't say no authentication and authorization, but it's about as close to no authentication and authorization as you can possibly get. My point is this, as we build AI software, we need to be thinking about security. And obviously this is what Snyk is very focused on.
So given that, what are some of the products that Snyk is offering that the platform engineers can start to build into their platforms to make their software and infrastructure more secure by default?
Well, I think about it as all just the Snyk platform. We have many different analysis techniques… is probably the better way of looking at it.
One of the techniques that we use is software composition analysis. It's our open source product. We look at all the open source components: Do they have vulnerabilities? Where are they used? Are those functions being called?
We have static application security testing, which is looking at the custom code that is introduced. Tracing every source to every sink and saying, “Has it been validated?”
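To make “source to sink” concrete, here's a minimal illustration (in Python, purely as an example): the user-supplied parameter is the source, the query execution is the sink, and the analysis asks whether anything validated or parameterized the data in between.

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # SOURCE: `name` comes from the user. SINK: string interpolation
    # straight into the SQL statement. A static analysis tool traces
    # this path and flags it as SQL injection.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()


def find_user_safe(conn: sqlite3.Connection, name: str):
    # Same source, same sink, but the parameterized query breaks the
    # taint path, so the finding goes away.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```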
And these can all be implemented, by the way, in the pipeline. It doesn't matter whether you're using GitHub Actions or Jenkins or… we use CircleCI. You can build this type of analysis into the pipeline.
We have other techniques. We have the IaC component, which looks at Infrastructure as Code in the Terraform and all that, and is it done securely? We have the container analysis, which looks at all the layers of the container to say, “Are there vulnerabilities introduced within the container?” We have a web product that will look at a built application and will crawl through it and look at all the entry points, the cookies, and the forms, and the query strings, and all the HTTP headers, and do input fault injection testing to find vulnerabilities. We have an API testing capability.
So we have all these techniques within our platform, but the key point is we can embed it within your lifecycle, all the way back to the very beginning. Right back in the developer IDE. As they're writing code, we can underline it and say, “Hey, this is a problem. Here's five suggested fixes for this particular problem.”
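For reference, most of these techniques map onto Snyk CLI commands that can be dropped into any of those pipeline stages (exact flags and invocations vary by version, so treat this as a sketch):

```bash
snyk test                        # software composition analysis of open source deps
snyk code test                   # static analysis of your own custom code
snyk container test myapp:1.0    # layer-by-layer container image analysis
snyk iac test infra/             # Terraform and other IaC configuration checks
```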
And that's where you get like the real shift left, right? I'm bringing this tooling all the way down to like where I'm literally starting to work on a feature, right? And being able to catch things like, “Hey, you just interpolated something right into a SQL query. You're probably going to have a bad time.”
So, as a platform team, I can bring Snyk products into the IDEs of my developers, start getting some base-level security there, but then I can bring it up all the way into… whether it's my container builds, scanning there… scanning my libraries right in my CI… but even… I didn't know you guys had an IaC product, that's super exciting. How does that work? Is that doing compliance-type scanning, using like CIS benchmarks and whatnot, or is that actually like probing what the IaC is creating in the cloud?
Yeah, so we don't do the runtime side of it. It's more the configuration side of it, the pre-build side of it. The configuration, if you will.
Okay.
But we do integrate with the runtime as well. Because ultimately what we want to do is find things in the runtime and trace it all the way back to where it was introduced within the environment. In the pipeline that you just mentioned, if I can suggest two things to the audience.
One is – yes, make sure that the developers are using it within the IDE, but I always recommend that people introduce their first gate as part of the PR check. When the developers or the team is checking things in to the source control management, that is the perfect time to implement the first gate. By the way, I don't recommend making it a hard gate, even for a critical vulnerability. I wouldn't say, “Block them. Don't allow them to check it in.” Instead, I would notify them as part of the PR check. Say, “Hey, you have this particular issue. By the way, you should download the IDE plugin.” Like, point them to all the right things. But, if you're trying to do a cultural shift, the PR gate's probably the best place to start to introduce a control, even if you don't hard block the development team.
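A sketch of that “notify, don't block” gate, again in GitHub Actions terms and again assuming the `snyk/actions` step: `continue-on-error` keeps the finding visible on the PR without failing the check.

```yaml
# Hypothetical soft gate: surface security findings on every PR,
# but never hard-block the merge while the culture shift is underway.
name: pr-security-check
on: [pull_request]
jobs:
  advise:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Report issues without blocking the merge
        uses: snyk/actions/node@master
        continue-on-error: true   # notify, don't fail the check
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```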
Yeah, that's bringing in something like pre-commit, right? Where it's just like, I'm going to commit and it's like, “Boom, I found the issue.” Not hard blocking the person, I think that that is interesting.
I'm a person that, when I'm developing, I tend to commit early and commit very often. And I love… when GitHub added that draft PR feature, I felt like that was made just for me because I'll have like three lines of code, nowhere near complete, and I'm like, “I'm going to pull people in on this early and start getting opinions.” And that's for my engineering partners. But when you're starting to think about your operations partners and your security partners, as an engineer, if you want to get that collaboration… which is really what's going to give you that good DevSecOps, like good security embedded, like platform team… like that collaboration right there of like getting it out there, “Oh, I see that I introduced something that could lead to a breach.” Being able to have a security engineer come in and be like, “Actually, this is low risk. Earmark this, like it's something we should look at in the future, but it shouldn't slow you down.” Like that is one of those places where you get that opportunity to introduce collaboration early. That could even save you time. Again, because like you've said a couple of times here, it's not just about the vulnerability, it's about assessing the risk of how that vulnerability impacts the software.
It's not just about the vulnerability, it's about assessing the risk of how that vulnerability impacts the software.
Yes.
And is it the most important thing to solve right now? And when you see an alert come back for the first time, it can be panic-inducing and you're like, “Shit, I gotta go fix this.” When in reality, like, it might not be a big deal at all. So I really like that.
Yeah, it may be high impact, but low likelihood. And actually it's an argument why you continuously do the security assessment all the way up to deployment, because you might have a compensating control in the real-world environment. You also may have the reverse of that. You may be introducing… you know, it might be going on to a container that's vulnerable. Doing it all the way along so that you have no surprises in the process is what's key.
Yeah, and that's just going to stop you from getting like hung up in that red tape at the end, right? Like we got it in early. We got eyes on it early. Whether or not it's a big issue, like we find that out quick and it's not, you know, Friday afternoon while you're trying to launch this feature going like, “Oh damn, I'm going to be here through Sunday all of a sudden.”
Awesome. Well, Danny, I really appreciate you coming on the show today. This has been a lot of fun for me. This is super exciting. Where can people find you online?
Well, of course you can always come to the Snyk website at Snyk.io, but I'm on LinkedIn, Danny Allan, A-L-L-A-N, people always spell it wrong, but easy to find me on LinkedIn, and that's where I do most of my social posting.
Okay, LinkedIn's the best. That's my number one now, I love LinkedIn. I know that's a nerdy thing to like, but I feel like the best technical conversations are happening there. I'm sorry Twitter and Bluesky fans, but LinkedIn's where it's at.
Awesome, I love it. Well, thanks so much for coming on the show. I want to have you back at some point in time just to talk AI and security. When you started talking about it, I was like, “Oh man, this is an entire episode, and I know I've only got him for an hour today.” But awesome, thanks so much, I really appreciate it.