Episode 118 of the Public Key podcast is here! Web3 was starting to look like the wild west, with smart contract compromises and sophisticated attacks by hackers, but with law enforcement paying serious attention and builders like Brian Pak, Co-founder and CEO of ChainLight, the industry is starting to shine a light on these illicit actors and combat Web3 and DeFi hacks.
You can listen or subscribe now on Spotify, Apple, or Audible. Keep reading for a full preview of episode 118.
Public Key Episode 118: Securing Web3: How ChainLight Is Battling DeFi Hacks
“We also should not pretend that law enforcement is going to solve the problem. It only helps, but it doesn’t completely solve the problem. There will always be attackers outside the reach of the law” – Brian Pak
Web3 was starting to look like the wild west, with smart contract compromises and sophisticated attacks by hackers, but with law enforcement paying serious attention and builders like our guest, Brian Pak, Co-founder and CEO of ChainLight, the industry is starting to shine a light on these illicit actors and combat Web3 and DeFi hacks.
Ian Andrews (CMO, Chainalysis) sits down with Brian to discuss the early days of ChainLight, from discovering early Ethereum bugs to creating innovative security solutions like Digital Asset Risk Tracker (DART) and the Relic Protocol.
The duo explores major Web3 and DeFi exploits, white hat hacking ethics, and South Korean crypto politics. Brian shares how the newly created crypto threat-sharing center (SEAL) is striving to enhance the safety and transparency of the Web3 ecosystem, and discusses the emergence of law enforcement engagement in the space.
Quote of the episode
“Web3 was a wild west and no one thought that, you know, law enforcement was paying attention or cared. That has clearly changed and law enforcement is showing, you know, through a clear action that they will pursue these cases” – Brian Pak (Co-founder and CEO, ChainLight)
Minute-by-minute episode breakdown
2 | Brian’s journey into crypto and discovering Ethereum bugs in the early days
5 | Smart contract code audits and continuing to learn about new web3 attack vectors
8 | Identifying new and old attack vectors like price oracle manipulation and bridge vulnerabilities
10 | What is the Digital Asset Risk Tracker (DART) and how it identifies illicit trends in memecoin projects
14 | ChainLight introduces Relic Protocol, which lets smart contracts access historical data without intermediaries
20 | The crypto regulatory framework in South Korea and how it pushes Korean builders to set up entities in Singapore
24 | How white hat hackers have been given a bad name in crypto
28 | Building interoperability and discussions around Chainlink’s Cross-Chain Interoperability Protocol (CCIP)
31 | Introducing Lumos and how ChainLight is illuminating the shadows of Web3 hacks
33 | ChainLight joins white hat hacker group SEAL and its crypto threat-sharing center
Related resources
Check out more resources provided by Chainalysis that perfectly complement this episode of Public Key.
Speakers on today’s episode
- Ian Andrews, Host (Chief Marketing Officer, Chainalysis)
- Brian Pak (Co-founder and CEO, ChainLight)
This website may contain links to third-party sites that are not under the control of Chainalysis, Inc. or its affiliates (collectively “Chainalysis”). Access to such information does not imply association with, endorsement of, approval of, or recommendation by Chainalysis of the site or its operators, and Chainalysis is not responsible for the products, services, or other content hosted therein.
Our podcasts are for informational purposes only, and are not intended to provide legal, tax, financial, or investment advice. Listeners should consult their own advisors before making these types of decisions. Chainalysis has no responsibility or liability for any decision made or any other acts or omissions in connection with your use of this material.
Chainalysis does not guarantee or warrant the accuracy, completeness, timeliness, suitability or validity of the information in any particular podcast and will not be responsible for any claim attributable to errors, omissions, or other inaccuracies of any part of such material.
Unless stated otherwise, reference to any specific product or entity does not constitute an endorsement or recommendation by Chainalysis. The views expressed by guests are their own and their appearance on the program does not imply an endorsement of them or any entity they represent. Views and opinions expressed by Chainalysis employees are those of the employees and do not necessarily reflect the views of the company.
Transcript
Ian:
Hey, everyone. Welcome back to another episode of Public Key. This is your host, Ian Andrews. I am joined by Brian Pak, who is the Co-founder and CEO at ChainLight. Brian, welcome to the show.
Brian:
Hello. Thanks for having me here.
Ian:
Brian, I’ve spent some prep time kind of digging in on the work that your company ChainLight’s done, and it is super impressive. I think there are some stories that we’re going to get into here as we dig in on the podcast, some of the early days of hacking and pen testing work. But maybe tell us about ChainLight today, because I suspect there are some people out there who are maybe not familiar with who the company is and the work that you do.
Brian:
Yeah, so ChainLight is a security consulting firm, and we also build security-related solutions focused on Web3. Our mission is to make Web3 a secure place, so that we can have mass adoption of Web3, where people can safely interact and play in the field. We’ve done a lot of audits. Yeah, and we’re also building products that help Web3 become more secure.
Ian:
Well, it’s certainly needed, right? I think last year, approaching $2 billion in value was stolen in hacks from DeFi protocols, and dApps, and exchanges. It seems like the intensity of the threat landscape is only increasing every day. I know you’re based in Korea. Does ChainLight focus only on the Korean market or are you working around the world?
Brian:
Oh, we’re definitely working around the world. We have customers from Korea, Singapore, the US, and other countries as well.
Ian:
Very cool.
Brian:
We’re working across the globe.
Ian:
Yeah. How big is the team?
Brian:
We have about 20 people-
Ian:
Okay.
Brian:
… that are focusing on Web3. Yep.
Ian:
Yeah. For the Web3 market, that’s a big team. I’m really curious about how people get into the space. So talk to me about what attracted you to the world of cryptocurrency. When was the first time you remember? Because you’ve been in security I think most of your career, but when was the first time you came across crypto?
Brian:
Oh, I think the actual first time that I interacted with crypto was back in 2010, 2011 when I was playing with Bitcoin.
Ian:
Yeah.
Brian:
Back then, it was kind of nothing. No one really cared too much about it. But the concept of a decentralized, distributed currency was very interesting. And you mine these Bitcoin. So I was kind of playing around with the node and stuff, but I wasn’t really getting into the cybersecurity part of it. It was just more an interesting activity. But then it was, I guess, in 2016, 2017 when we, as a company, started looking into the field, because we’d been getting requests from our customers like, “Hey, do you do blockchain?” We actually come from the Web2 world.
Ian:
Yeah.
Brian:
So we’d been doing a lot of browser security, operating system security, that sort of stuff. But then, blockchain kind of became the natural target after that. And back in those 2016, 2017 days, we started looking into Ethereum. And then we actually found a couple of bugs, vulnerabilities, in the Ethereum node that could crash the entire network, so we reported them to the Ethereum Foundation. And we were able to get, I think, $20,000 worth of bounty back then. But I’m sure it would have been a lot larger if we had found and reported those bugs these days, but yes.
Ian:
Things are certainly bigger, higher stakes these days. Talk to me about that though. So a bug that could’ve crashed the entire network. So this is something in the Ethereum node software that you would run if you were a validator, a miner at the time on the network. Do you remember what the vulnerability actually entailed?
Brian:
Yeah, so back then it was Geth. So it was written in Go. It was implementing the Ethereum network basically. And there was like a logic bug that could just kind of fault out, right? It raised an exception, and we could trigger it from a smart contract. So if you deployed a malicious smart contract, then it gets propagated.
Ian:
Yeah.
Brian:
And then, as each node tries to parse it, it’ll crash. So then, if every node crashes, that means the entire network goes down. Yeah.
Ian:
Yeah, the entire network goes down.
Brian:
Goes down.
Ian:
As the contract gets replicated from node to node to node.
Brian:
Yeah, exactly.
Ian:
You just take out the entire network.
Brian:
Yep.
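To make that failure mode concrete, here is a toy Python sketch, purely illustrative and not the actual Go code in Geth: one unhandled exception in shared parsing logic, triggered by attacker-controlled data that every peer receives, becomes a network-wide outage because every node runs the same code on the same payload.

```python
# Toy illustration only: not the actual Geth bug, just the general pattern of
# one unhandled exception in shared parsing logic taking out every node that
# processes the same propagated payload.

def parse_contract_payload(payload: bytes) -> dict:
    # Hypothetical parser with a missed edge case: an empty payload raises
    # instead of being rejected gracefully.
    if len(payload) == 0:
        raise ValueError("unexpected empty payload")  # the "logic bug"
    return {"size": len(payload)}

def process_block(node_id: int, payload: bytes) -> None:
    # In a real client this sits deep inside block/transaction processing;
    # without a guard, the exception takes the whole node process down.
    parse_contract_payload(payload)
    print(f"node {node_id}: block processed")

malicious_payload = b""  # attacker-crafted input, propagated to every peer

for node_id in range(5):  # every node runs the same code on the same data
    try:
        process_block(node_id, malicious_payload)
    except ValueError as exc:
        print(f"node {node_id}: crashed ({exc})")  # all nodes fail together
```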
Ian:
That’s pretty incredible. What is the process of finding something like that? I’ve always wondered about the nature of security research. Do you just pick a potential target and kind of tinker around the edges until you find something? How do you go about discovering something like that?
Brian:
It depends. Sometimes we take kind of a bottom-up approach where we just start looking at the code, reading the code, and trying to understand what the entire software or hardware does. And then from there, we think of ways to break it. Right? What are some of the edge cases that developers may not have thought of?
Ian:
Mm-hmm.
Brian:
And those edge cases usually cause bugs. And if they have security implications, then it becomes a security vulnerability. And then, sometimes we target specific features that are prone to be vulnerable. So when you parse media content, that’s usually very complicated, hard to implement correctly. So we just focus on that feature specifically.
Ian:
Yeah, I stumbled across a great example of this actually as I was doing research for the show. Anybody that’s got Telegram on their phone should probably be aware of this. I certainly wasn’t until I was thumbing through your Twitter feed. There was a vulnerability inside Telegram where the client by default is set to auto-download media.
So if I send you a video, or a text message, or a voice message, your client automatically downloads it. But if I’m sending you a malicious file, I can have sort of remote code execution happen, I think, which is all sorts of bad, and can lead to things like device takeover, and other compromises, which is super scary in a world where people are carrying around lots of crypto on their phones, right?
Brian:
Yep. Yeah, once your phone or any device that you interact with gets hacked, that means they can leak the private keys to your wallet and whatnot.
Ian:
Yeah.
Brian:
So it’s super scary.
Ian:
So talk a little bit about the work that ChainLight is doing today. I saw this picture that I think was at the Blue House, the president’s official residence, I think, in Korea. So you’re getting some pretty significant attention. Who are the clients that you’re working for? And what’s the type of work that takes up most of your time?
Brian:
So I guess we started our smart contract auditing work in 2017, 2018.
Ian:
Yeah.
Brian:
At the time, there were only a handful of security companies in the space. Developers really didn’t understand the different kinds of smart contract vulnerabilities, so it was kind of easy to find critical issues.
Ian:
Yeah.
Brian:
So that’s when all the new DeFi protocols were coming out. But then as the space kind of matured, new classes of vulnerabilities arose, and we had to kind of research and learn those as well, and make sure our clients were secure from the new types of vulnerabilities, new types of mistakes.
And so, we were constantly busy either auditing code for clients or keeping up with the latest developments. Because in order to audit something and find security issues, you have to be very fluent in the context, right? You have to know the new platform they’re trying to implement on, and what inherent vulnerabilities could arise while using that sort of chain, SDK, or smart contract code.
Ian:
What do you see as some of the most bleeding edge exploits in the smart contract world today? My sense as a non-expert is that certainly the attacker sophistication has gone up. The contracts have maybe also gotten a little more secure, but they also have more features and capabilities. So it seems like we’ve solved some of the low-hanging fruit, but we’re maybe exposing more surface area as contracts are trying to do more things. I’m curious your expert perspective on that.
Brian:
So I guess we can… Sorry, I was a little blanking out there. So you were asking…
Ian:
Just about what’s going on today versus what you saw maybe a year or two ago. Where are the threat actors? What are some of the novel or new exploit cases that you’ve seen come out recently or vulnerabilities that you’ve discovered when you’ve been doing audits that you’ve been able to secure before they cause any harm as people are kind of advancing the state of smart contracts?
Brian:
Right. So history kind of rhymes, and the same issues and same mistakes you’d think people are now aware of continue to happen over and over again.
Ian:
Yeah.
Brian:
Obviously, some of the vulnerability types or exploit types like price oracle manipulation have been very popular in the past. That doesn’t happen too often anymore, but it still happens from time to time. And then obviously, some of the bridge incidents kind of show you that attackers are now shifting their targets from actual smart contract and financial engineering attacks to just directly attacking Web2 components as well.
So we’re seeing some paradigm shifts, but attackers aren’t going to just give up on targets that still work, right? They’re just going to go after where the money is. So anything with money is a big target for hackers. And unless security experts examine these systems carefully, they’re in a very risky position.
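To make the price oracle manipulation pattern Brian mentions concrete, here is a minimal Python sketch with made-up numbers, not any specific incident: a protocol that reads the spot price straight out of a constant-product AMM pool can have that price skewed within a single transaction by one large, often flash-loaned, swap.

```python
# Minimal sketch of price oracle manipulation against a constant-product AMM
# (x * y = k). All pool sizes and amounts are illustrative, not from a real
# incident.

def spot_price(reserve_token: float, reserve_usd: float) -> float:
    # Naive "oracle": token price implied by the current pool reserves.
    return reserve_usd / reserve_token

def swap_usd_for_token(reserve_token: float, reserve_usd: float, usd_in: float):
    # Constant-product swap, ignoring fees for simplicity.
    k = reserve_token * reserve_usd
    new_reserve_usd = reserve_usd + usd_in
    new_reserve_token = k / new_reserve_usd
    token_out = reserve_token - new_reserve_token
    return new_reserve_token, new_reserve_usd, token_out

# Pool starts with 1,000,000 TOKEN and $1,000,000: spot price $1.00.
reserve_token, reserve_usd = 1_000_000.0, 1_000_000.0
print(f"price before: ${spot_price(reserve_token, reserve_usd):.2f}")

# Attacker swaps in $4,000,000 (e.g. from a flash loan) in one transaction.
reserve_token, reserve_usd, _ = swap_usd_for_token(reserve_token, reserve_usd, 4_000_000.0)
print(f"price after swap: ${spot_price(reserve_token, reserve_usd):.2f}")

# A lending protocol reading this spot price now values the attacker's TOKEN
# collateral at ~25x its fair value, letting them borrow far more than they
# should; the usual mitigation is a time-weighted or off-pool price feed.
```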
Ian:
Yeah. Now, you mentioned that as you’ve gained more and more expertise in Web3, you’ve diversified from just audits to actually building some of your own software to help your clients secure their infrastructure. Talk a little bit about what you’ve built.
Brian:
So as we mentioned, there are so many different classes of risks and ways for things to go wrong, right? So it is kind of hard to enumerate them one by one. And a lot of these tokens, especially meme coins these days, are just straight-up scams or rug pulls. Sometimes they’re sophisticated, but a lot of the time there are indicators in their code.
And the whole point of moving things to the blockchain and writing smart contracts is that everything is written in code, and code doesn’t lie. So it is important to be able to analyze code correctly. But for us as humans to do that one by one, it’s just going to take forever. People are building faster than the speed at which you can actually audit and analyze.
Ian:
Yep.
Brian:
So what we have built at ChainLight is something called DART, the Digital Asset Risk Tracker. We currently track around 60 different risk factors, ranging from ownership verification to liquidity risks, all automatically, from analyzing code as well as on-chain data, because any compromise in these areas could result in unauthorized token creation, or price volatility, or security breaches.
When we deal with the unique challenges posed by new tokens and meme coins, which often lack thorough audits or are highly speculative, we focus on the ownership risk, and make sure you can’t mint more than what the white paper actually says, those kinds of risks. But we also look at some of the vulnerabilities that we’ve found in the past that arise from buggy code. We can analyze those automatically using static analysis.
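As a rough illustration of the kind of automated ownership check Brian describes (these are not DART’s actual rules, and the RPC endpoint, token address, and ABI below are placeholders), a couple of simple signals can already be pulled with web3.py: whether a privileged owner still controls the contract, and whether a mint function exists at all.

```python
# Simplified sketch of automated token risk signals in the spirit of what DART
# does; NOT ChainLight's actual rules. Endpoint, address, and ABI are
# placeholders to fill in.
from web3 import Web3

RPC_URL = "https://example-rpc.invalid"                       # placeholder
TOKEN_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
TOKEN_ABI: list = []  # the token's verified ABI, e.g. fetched from an explorer

w3 = Web3(Web3.HTTPProvider(RPC_URL))
token = w3.eth.contract(
    address=Web3.to_checksum_address(TOKEN_ADDRESS), abi=TOKEN_ABI
)

findings = []
fn_names = {fn.fn_name for fn in token.all_functions()}

# Signal 1: a privileged owner who has not renounced control of the contract.
if "owner" in fn_names:
    owner = token.functions.owner().call()
    if int(owner, 16) != 0:
        findings.append(f"ownership not renounced (owner={owner})")

# Signal 2: a mint function exists, so supply can grow beyond what the white
# paper promises (a real analysis would also check who is allowed to call it).
if "mint" in fn_names:
    findings.append("mint() present: supply can be inflated by a privileged caller")

print(findings or ["no basic ownership/mint signals found"])
```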
Ian:
That’s super cool. How are your customers using that? I know you’re working with some exchanges. Does this become part of their listing criteria where they’re consulting DART to make decisions about, am I going to put dogwifhat on the exchange or not?
Brian:
Yeah, so we are hoping. And as you said, we already have some customers. But we are hoping the most common users of DART will be careful ecosystem participants who want to assess real risks and potential threats before they invest or interact, right? So currently, our customers include major cryptocurrency exchanges, as they rely on it to accurately monitor listed tokens, as you mentioned, and ensure customer protection, and kind of strategically curate tokens for future listings as well.
So they are constantly looking at already-listed tokens to see if they are still safe for their customers. And they’re also constantly looking for new tokens to list, but they don’t want to list any random coins that could be dangerous. So we have customers who are using DART to kind of gauge that. And then, project builders are also using our products because they want to see in real time that what they have built is secure, right?
Ian:
Yep.
Brian:
So it may not be as thorough as actual manual audits right now, but it can still give them good guidance and kind of a good gauge of how secure their current code is. So you can kind of think of it as more like a scanner that runs every time you deploy code, and makes sure you don’t have any blatant mistakes or risks that you’re exposing.
Ian:
Yeah. Now, I think there’s something like 8 million tokens on the Ethereum network alone. Like order of magnitude, that’s directionally correct. I think there’s thousands created a day. How do you decide what shows up in the DART dashboard?
Brian:
Yes. At the moment, we’ve enumerated all the tokens that are available on the Ethereum network and EVM-compatible networks.
Ian:
Yeah.
Brian:
And there are a lot.
Ian:
Yeah.
Brian:
So we can’t possibly have everything… We could, but it’s not going to be useful because there’ll be a lot of nonsense.
Ian:
There’s a ton of junk out there. Yeah.
Brian:
Yeah. So what we have decided is to include any tokens that are active. So there’s circulating volume, there are interactions and transactions happening on the network regarding that token. And then, it has a certain amount of value. So is it actually over a certain value traded on major cryptocurrency exchanges, whether that be DeFi or a centralized exchange, right?
Ian:
Yeah.
Brian:
So we have some criteria for selecting those tokens, so that people can focus on somewhat more real and active projects.
Ian:
Yeah. We did some research here at Chainalysis because we were curious about kind of the rug pull scam activity that was happening with token creation. And we looked at sort of the pool of all tokens created last year. And amazingly, it’s a very large number, but only a relatively small percentage ever even get listed on a DEX.
Brian:
Right.
Ian:
And then, an even smaller percentage of those ever have more than, say, $300 of liquidity.
Brian:
Yeah.
Ian:
But interestingly, of the ones that get above $300 of liquidity, about half exhibit behavior that’s consistent with what you would think is a rug pull, where the person who initially created the token ends up withdrawing all of the real liquidity, right? All the stablecoins that are traded against the token pair. And there are some people who are doing it kind of prolifically. So I love what you’re building with DART. Giving people that kind of asset intelligence view is really powerful. It’s something that I think is lacking in the ecosystem right now.
Brian:
Yeah.
Ian:
So what else are you working on? Is there other software that’s in the works besides DART?
Brian:
I mean, we have Relic Protocol, but not a-
Ian:
Yeah, tell me about that.
Brian:
Okay. It’s not really a security solution.
Ian:
Okay.
Brian:
Even though we’re a cybersecurity company, we also decided to build something, because we’re in Web3, we’re builders as well. But it’s interesting because it’s the first trustless oracle for Ethereum’s historical data. It basically enables dApps, so smart contracts, to access all of Ethereum’s historical data with maximal security and minimal gas costs, without trusting any centralized authority.
So right now, for instance, if you want to access more than the latest 256 blocks on-chain, you can’t. And in order to get that kind of insight, someone has to feed that information to the chain in real time. And who’s going to do that, right? And do you have to trust them? That builds in a centralization risk. So we’re trying to build a trustless oracle where you don’t have to trust anyone but the math.
Ian:
Yeah.
Brian:
So we use zero-knowledge technology to basically prove that at a certain time in the past, in a certain block, this storage slot had this value, and you can mathematically prove that fact. And then, you can verify that in the dApp.
So that way you can do things like, oh, this account on Ethereum had this transaction, so they sent this amount to this cryptocurrency exchange, or it interacted with this contract, or held this NFT at that time. You can do that sort of validation in a smart contract without relying on any trusted setup.
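Relic’s own SDK aside, the primitive Brian is describing, proving that a particular storage slot held a particular value at a particular historical block, can be seen off-chain with a standard eth_getProof call against an archive node. The sketch below uses placeholder addresses and block numbers; per the conversation, Relic’s contribution is making facts like this cheaply verifiable on-chain with zero-knowledge proofs rather than trusting whoever fetched them.

```python
# Rough sketch of the underlying primitive: a Merkle-Patricia proof that a
# storage slot held a given value at a historical block (eth_getProof).
# Requires an archive node; the endpoint, address, and slot are placeholders,
# and this is not the Relic Protocol SDK itself.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-archive-node.invalid"))

contract_addr = Web3.to_checksum_address("0x0000000000000000000000000000000000000000")
storage_slot = 0                 # which slot we want to prove
historical_block = 15_000_000    # some block older than the latest 256

# The node returns the slot's value plus the Merkle proof linking it to the
# state root committed in that block's header.
proof = w3.eth.get_proof(contract_addr, [storage_slot], historical_block)

print("value at slot:", proof["storageProof"][0]["value"])
print("proof nodes:", len(proof["storageProof"][0]["proof"]))

# On-chain, a contract cannot look back more than 256 blocks via blockhash,
# which is why a system like Relic instead verifies a succinct (zero-knowledge)
# proof of facts like this against an accumulator of historical block hashes.
```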
Ian:
What would be the scenario where I would want my smart contract to be able to make that validation? I can see how a human might be interested in that, if I was trying to verify that a particular wallet had owned an NFT at a particular point in the past. But what’s the context or common use cases that you would imagine where a smart contract would want to be able to do that same validation?
Brian:
Yeah. So one of the simplest examples, I guess, that we can think of is airdrops, right?
Ian:
Yeah.
Brian:
Airdrops happen all the time. Right now, the way it works is you basically look at it off-chain and build a whitelist, right?
Ian:
Yeah.
Brian:
And then, push that back into the chain. And then, there’s a deployer contract that just kind of distributes those based on this whitelist.
Ian:
Yep.
Brian:
Well, in the process, you could kind of sneak in your friend’s wallet address or your own wallet address.
Ian:
I think that happens all the time probably.
Brian:
Right. And people could see that after the fact, right?
Ian:
Yep.
Brian:
It’s going to be on the blockchain, so you can challenge them afterwards, but that’s kind of already too late at that point. And then, you kind of have to trust the entity that’s building this whitelist and deploying these. Whereas if you use Relic Protocol, then you don’t really need this middle step. You can just say, “Hey, did you actually own this NFT at this snapshot?”
Ian:
Yeah.
Brian:
And you can validate that and just kind of reduce that risk.
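The airdrop example boils down to a single question: who owned this NFT at the snapshot block? A minimal off-chain version of that check with web3.py looks like the sketch below (placeholder address, block, and token id; it needs an archive node). The point of Relic, as described above, is letting the smart contract itself verify the same fact trustlessly rather than trusting an off-chain script.

```python
# Minimal off-chain version of the snapshot check: who owned token_id at the
# snapshot block? Address, block number, and token id are placeholders; this
# needs an archive node and is not the on-chain, trustless flow Relic provides.
from web3 import Web3

ERC721_OWNER_OF_ABI = [{
    "name": "ownerOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "owner", "type": "address"}],
}]

w3 = Web3(Web3.HTTPProvider("https://example-archive-node.invalid"))
nft = w3.eth.contract(
    address=Web3.to_checksum_address("0x0000000000000000000000000000000000000000"),
    abi=ERC721_OWNER_OF_ABI,
)

snapshot_block = 15_000_000
token_id = 1
claimant = "0x0000000000000000000000000000000000000001"

# Query ownerOf as of the historical snapshot block, not the latest state.
owner_at_snapshot = nft.functions.ownerOf(token_id).call(block_identifier=snapshot_block)
print("eligible:", owner_at_snapshot.lower() == claimant.lower())
```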
Ian:
I can imagine that being really useful too, in the context where if I have a valuable NFT, I don’t necessarily want to keep it in my hot wallet.
Brian:
Right.
Ian:
I hear all these stories about people who transfer something back to a hot wallet in order to then claim the airdrop, but there’s a wallet drainer that had been lurking there for months, and all of a sudden they lose everything. So I could imagine using this in the same way.
Brian:
That is a very good point. Yes.
Ian:
Right? To validate point in time historical ownership. That’s really cool. And Relic Protocol is free for anybody to use?
Brian:
Yeah, it’s free right now for anyone to use. We have SDKs. We have documentation if people are curious about how to use them. We have some examples too, demos.
Ian:
Yeah.
Brian:
And then, we also imagine this to be a building block for reputation systems, right? I don’t think we’re going to have actual strict KYC like the current TradFi financial institutions, where you have to send your passport or driver’s license and have a one-to-one mapping from your actual identity to an account, because that’s not going to happen. Web3 isn’t about that, right? But I think it is still important to basically know how reputable you are as an account holder. We may not need to know who you are, but we want to know what you have been doing-
Ian:
Yeah.
Brian:
… on the chain, right? So this way what you can do is… Right now, Web3 has been like the Wild West, where anyone can participate and anyone can attack. So imagine you have a DeFi protocol that’s open and you have a lot of TVL, but you’re kind of concerned that some attackers may interact with your DeFi app, and find bugs, and eventually exploit them. If you use things like Relic Protocol, you can basically gate it, saying, “Oh, we only allow accounts that have existed at least a year on Ethereum.”
Ian:
Yeah.
Brian:
Or at least you have transacted this much, or transacted with some cryptocurrency exchanges, because then if something bad happens, you kind of have some strings to pull, that sort of stuff. So building reputation systems on top of this is possible without having a centralized entity.
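One way to read the “account has existed at least a year” gate: if an address already had a nonzero sent-transaction count at a block roughly a year old, it is not a freshly created attack wallet. The sketch below is an off-chain approximation with placeholder values; an on-chain version built on something like Relic would verify a proof of that historical state instead of trusting an RPC provider.

```python
# Off-chain sketch of an account-age gate: did this address already have
# transaction history at a block ~1 year ago? Placeholder address and endpoint;
# an on-chain version would verify a historical-state proof instead of
# trusting an RPC provider.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-archive-node.invalid"))

BLOCKS_PER_YEAR = 365 * 24 * 60 * 60 // 12   # ~12s blocks on Ethereum mainnet
account = Web3.to_checksum_address("0x0000000000000000000000000000000000000001")

latest = w3.eth.block_number
one_year_ago = max(latest - BLOCKS_PER_YEAR, 0)

# Nonce (sent-transaction count) at the historical block: > 0 means the
# account existed and had already transacted a year ago.
nonce_then = w3.eth.get_transaction_count(account, block_identifier=one_year_ago)
print("passes age gate:", nonce_then > 0)
```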
Ian:
Yeah. You don’t need to go back to a service that’s collecting all that data. You don’t have to recreate the credit bureaus in some sort of on-chain fashion.
Brian:
Right. You’re basically kind of showing your own proof that you’re safe and you’re believable. And then, you can do something interesting in DeFi as well where you’re like, “Oh, if you’ve interacted with us a lot more, then we can give you a better interest rate.” That sort of stuff. But now you don’t have to rely on a centralized system or a centralized feed. Customers could directly prove that they are… Sorry, I can’t think of the word. But yeah, they’re allowed to do that.
Ian:
That they’re not malicious probably, right?
Brian:
Yeah, yeah.
Ian:
Yeah. You’re trying to verify that I’m safe. It starts to make me think about the privacy pools paper that got published last year, which is kind of down this path of social proof over verified identities in order to justify. I’m curious, have you seen any customers who have actually tried to implement what you described, this idea of blocking wallets that are too new, or that have obvious historical interactions that would lead you to expect them to be malicious?
Brian:
I think there have been some projects that try to implement this idea.
Ian:
Yeah.
Brian:
But I think most of them kind of rely on a centralized database, basically from off-chain analysis. So think of it as, like, you query Etherscan and see that they have this historical data. But I don’t think I have seen any decentralized, trustless setup where they use things like Relic Protocol to build this.
Ian:
Yeah. Very interesting. So, one thing on maybe a totally different tangent. You’re in Korea. We have listeners from around the world, but for people who are maybe less familiar with the Korean landscape, my understanding is that Web3 and crypto have played a fairly big role in recent presidential elections. It’s become a political topic of maybe one-upmanship between the candidates. If you don’t mind, share some perspective on the current state of opinion and the regulatory and political climate around crypto in the country.
Brian:
I think Korea has been very cautious, I want to say, about kind of letting crypto expand. So there are regulatory restrictions around ICOs and everything like that. So a lot of Korean builders actually have entities in Singapore, business entities. They’re residing in Korea and working on their projects, but their legal entities are in Singapore and whatnot.
But the Korean government is slowly catching up, and they are trying to learn more about the technology itself. Because before that, the atmosphere was like, “Oh, it’s just a scam. It’s just digital currency. It’s nothing new.” But I think they’re catching up on the actual technology and what it can bring to the world, basically.
Ian:
Yeah.
Brian:
But I think it’s still very slow, in my opinion, compared to other countries like the US. We say the US is slow, but Korea, I think, is slower. And I think they’re being very cautious, and looking around at what other countries are implementing, and trying to learn from their experience to implement something for Korea.
Ian:
Yeah. Well, and then I would imagine the Terra Luna collapse and Do Kwon’s role in that probably threw some cold water on much of the enthusiasm.
Brian:
Yes.
Ian:
Talking about criminals in crypto, as I mentioned, Do Kwon. How do you think about white hat hackers? I know that your team’s participated in sort of lots of Capture The Flag events. You found lots of vulnerabilities, kind of zero-days.
Brian:
Mm-hmm.
Ian:
But I think there’s a culture it seems of white hat hacking in crypto. What is your take on that? Are they helpful? Or do you ever collaborate with white hat hackers? Or are these sort of vigilante justice where you kind of prefer they weren’t in the ecosystem?
Brian:
Yeah. So it’s a little bit interesting, because the terms white hat hacking, or white hat and black hat, existed before crypto.
Ian:
Sure.
Brian:
And I think in crypto, it carries a little bit of a different sense.
Ian:
Yeah.
Brian:
But white hat hackers are ethical hackers who identify and report vulnerabilities, usually with permission, right? You tell them you’re going to be looking at something, you find bugs, and you report if you find any issues, following legal and kind of ethical guidelines to ultimately improve security. But in crypto land, sometimes they call it white hat, but then they’re kind of demanding a bounty for it. It’s more like a ransom to me.
Ian:
Yeah, exactly.
Brian:
Yeah. So if you’re a true white hat, obviously it shouldn’t depend on whether you get a 10% or 20% bounty. You could get nothing, but you’d still be willing to report and give back, right?
Ian:
Yeah.
Brian:
So I think the term kind of got a little, I don’t know, what’d you say?
Ian:
The black hats co-opted the white hat.
Brian:
Yeah.
Ian:
Hey, I’m going to rob you.
Brian:
Right.
Ian:
But I’m going to tell you I’m a white hat, and you give me 10% of what I stole, and I’ll return the rest. It’s never sat particularly well with me.
Brian:
Right.
Ian:
I don’t think that model-
Brian:
It should be the other way around, where it’s like, “Hey, I found this vulnerability. But in order to keep the funds secure, I took them. Now I’m willing to give back a hundred percent.” And then it’s on the project team to say, “Okay, we’re grateful. We’ll provide 10% or whatever it is as a token of appreciation.”
Ian:
Yeah.
Brian:
But then, there has always been an argument in the TradFi or traditional security realm around, okay, bug bounties, “no more free bugs.” It has been the case where security researchers found these bugs, and they’re kind of, I don’t know, expected to hand over these vulnerabilities, because why wouldn’t you want to do that?
Ian:
Yeah.
Brian:
But when you think about it, it takes skill and time to find these issues, and not being rewarded for that is also disincentivizing. So there needs to be some way to incentivize people and have them be more enthusiastic about finding these issues, bringing them to light, and reporting them, rather than using them for their own illegal financial gain.
Ian:
Yeah.
Brian:
I don’t know. It’s a hard challenge, I guess.
Ian:
Well, I don’t know if you’re following the news. But just this week as we’re recording, the Mango Markets flash loan hacker was convicted.
Brian:
Yep.
Ian:
And this is exactly the scenario that we’re talking about, where he very publicly told everyone he discovered a vulnerability and then went and exploited it, describing it on Twitter later, I think, as a very profitable trading strategy.
Brian:
Yeah. It was a successful arbitrage, right?
Ian:
Yeah, yeah. And I think he initially made off with $100 million in gains. And then he later returned some of the money in order to, what he thought, I think, was negotiate a non-prosecution agreement with Mango. The US Department of Justice disagreed, and said that he was negotiating with the wrong people, I think.
Brian:
Yeah.
Ian:
So whether or not you believe that code is law or law is law, you can probably end up on either side of this argument. But my hope, I think, is that that case sort of dissuades the people that are like, “Well, I’m not going to hope for a bug bounty. I’m just going to take the money, and then pay myself a bounty, and return the rest.”
Brian:
Right.
Ian:
It seems like that behavior maybe is discouraged with this conviction. I don’t know if you have a different opinion.
Brian:
Yeah, I completely agree. I mean, Web3 was a Wild West, and no one thought that law enforcement was paying attention or cared. That has clearly changed. And law enforcement is showing, through clear action, that they will pursue these cases. Anyone thinking about being a black hat needs to seriously reconsider the consequences. And as you mentioned, returning the stolen funds minus a 10% fee is not a valid defense anymore.
Ian:
Yeah.
Brian:
So yeah, ultimately this is good for the space, in my opinion. Security is multilayered. So law enforcement discouraging black hat attacks definitely helps drive folks who might be tempted toward bug bounties instead. But we also should not pretend that law enforcement’s going to solve the problem, right?
Ian:
Yeah.
Brian:
It only helps, but it doesn’t completely solve the problem. There’ll always be attackers outside the reach of the law, like North Korea. They don’t really care whether law enforcement is pursuing this or not, right? So we should definitely also focus on making Web3 secure through technological improvements.
Ian:
I am curious about a blog that your team wrote recently that was titled Ticking Time Bombs on Interoperability Protocols. And I’m particularly interested because I actually had the chief product officer from Chainlink on the podcast recently.
And this was one of the things that we talked about quite a bit, because it seemed like it was getting us away from the need for bridges, which have been kind of notoriously exploited. Most famously with Axie Infinity, but there have certainly been a number of other bridge attacks yielding big rewards for our North Korean hacking friends, the Lazarus Group.
But it sounds like you’re calling out that architecture as having some significant security risks. Can you maybe take us through what you discussed in the blog?
Brian:
Yeah, so bridges themselves have been very notorious targets because you’re trying to interconnect different chains, and they’re very different. So we needed some infrastructure and Web2 components, and that inherently involved key management and whatnot. And we as humans have been very bad at that, and attackers have been targeting it. So an Interoperability Protocol is, I think, more of a generalized concept of a cross-chain bridge.
So we may not call them bridges, but in concept, it’s basically a generalized form of them. But here, while bridge operations are mostly limited to token transfers, Interoperability Protocols support calling addresses or contracts on other blockchains through messaging. So it’s more of an abstract concept that will more smoothly connect different chains together. And you said you interviewed the Chainlink person?
Ian:
Yeah, Kemal El Moujahid, the Chief Product Officer. Yeah.
Brian:
Okay. So Chainlink’s Cross-Chain Interoperability Protocol, I believe. CCIP.
Ian:
Yeah, CCIP. Yep.
Brian:
Yeah, it has been one of the most prominent protocols here. And CCIP has implemented very strong off-chain features to mitigate some of the security risks. These include a risk management network that validates messages and detects anomalies. But for general Interoperability Protocols, the key focus areas for audits, if we were to audit these types of protocols, would be estimating gas fees.
People could abuse those, attackers could abuse those, and cause a lot of damage. And handling finality issues on the destination chains, because reorgs happen all the time. You want to make sure every chain is on the same page. And preventing duplicate message execution, and whatnot. So those still need to be carefully handled. But again, compared to traditional bridge methods, this kind of gives you more flexibility and smoother operations between different chains.
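Of the audit focus areas listed here, duplicate message execution is the easiest to picture: the destination side of a cross-chain messaging protocol has to remember which message IDs it has already executed, or a relayer can replay the same “mint 100 tokens” message over and over. A generic Python sketch of that replay guard (not CCIP or any specific protocol’s code):

```python
# Generic sketch of replay protection in a cross-chain message handler: the
# destination side records executed message IDs so a relayed message cannot be
# executed twice. Not CCIP or any specific protocol's code.
import hashlib

class MessageEndpoint:
    def __init__(self) -> None:
        self.executed: set[str] = set()
        self.balances: dict[str, int] = {"alice": 0}

    def execute(self, source_chain: str, nonce: int, payload: dict) -> bool:
        # The message ID commits to origin chain, nonce, and contents.
        msg_id = hashlib.sha256(
            f"{source_chain}:{nonce}:{sorted(payload.items())}".encode()
        ).hexdigest()
        if msg_id in self.executed:
            return False  # duplicate delivery: refuse to execute again
        self.executed.add(msg_id)
        self.balances[payload["to"]] += payload["amount"]
        return True

endpoint = MessageEndpoint()
msg = {"to": "alice", "amount": 100}
print(endpoint.execute("chain-A", nonce=1, payload=msg))  # True: first delivery
print(endpoint.execute("chain-A", nonce=1, payload=msg))  # False: replay blocked
print(endpoint.balances["alice"])                          # 100, not 200
```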
Ian:
So you like the fact that we’re moving from sort of the bridge architecture to the Interoperability Protocol?
Brian:
I personally don’t really have a view on, I guess, whether it’s a good thing or a bad thing that we’re moving towards this, but I think this is more of a paradigm that we’re experiencing and moving towards. And as I said, in my opinion, an Interoperability Protocol is more of a generalized form of a bridge. So bridges are not going away, right? There are so many chains out there that we need some way to bridge them together to make this more usable and bigger and more sustainable. So this is just one attempt, I feel like. But I could imagine other attempts being made to make this more secure and transparent.
Ian:
Yeah. It’s definitely going to be interesting. It feels like as we keep innovating, we’re just increasing the attack surface that hackers have the opportunity to compromise. I’m curious, as you think about the landscape today. What’s really keeping you up at night? Where do you see the biggest potential vulnerabilities that are kind of going unaddressed that people should be spending more time thinking about than they are?
Brian:
I think the fact that these attacks haven’t really been decreasing. They’ve only been increasing.
Ian:
Yeah.
Brian:
The total damage or dollar value might not be increasing significantly year over year anymore. But still, these incidents happen all the time. We actually have a small product called Lumos. You can check it out at lumos.chainlight.io, where we keep track of all the recent incidents, including hacks, rug pulls, and everything. It’s similar to DefiLlama’s Hacks page, but Lumos actually gives you a bit more detail, a summary of what the incident was, how it was hacked, and what kind of vulnerability it was.
And then most importantly, we wanted to track what happened before the incident and what happens after. So was it audited before? And then, once it got hacked or rug pulled, was that in the scope of the audit report? Because sometimes auditors audit a project, but then the builders change their code or deploy new code without getting audited again, and that code was buggy, for instance.
Ian:
Yep.
Brian:
Then, it was out of scope. Or some auditors may mention the risk, but the project team just acknowledges it and moves on, and then the incident happens. So we wanted to see that kind of flow, to have a little more detail about each incident. And then also, teams sometimes come up with recovery plans or compensation plans. They announce them, but do they actually follow up on their execution plans?
Ian:
Yeah.
Brian:
We wanted to track that as well. So there’s a site for that. But I don’t think 2024 is going to be vastly different from previous years.
Ian:
Yeah.
Brian:
Potential attack hotspots may include smart contract vulnerabilities, cross-chain bridge attacks, governance manipulation, and so on. And of course, as you mentioned early on, social engineering and phishing attacks are just ever-growing. A lot of people are trying really hard to combat that, but these drainers and phishing attempts keep coming through ad campaigns. If you look at just Twitter, you see all the ads that are just scams and drainers.
Ian:
Absolutely. Yeah.
Brian:
So that’s just going to grow and cause more harm. So we have to act fast, and we need a lot more alliances, and people who care about security.
Ian:
Yeah. Your company ChainLight is part of the SEAL team, right?
Brian:
Yes.
Ian:
Yeah. Maybe talk a little bit about that initiative, because I think that’s a pretty important thing that the community has come together around.
Brian:
Yeah. So SEAL has become a really big initiative at this point. A lot of big names, people who have impact, have been helping and getting together to actually make this place a lot safer. We have one goal: to make Web3 secure. And there are a lot of participants with different roles. The way we first got involved with SEAL is that we had this issue where we found vulnerabilities in some protocols, and we were able to write an exploit. And sometimes it’s an ongoing exploit as well, where we’ve seen variant attacks.
So there was one attack that was known, and it was public already. So attackers were also trying to copy that attack against other platforms and other protocols. We already had an exploit to white hat hack it. But we didn’t want to do it without their consent, the project’s consent. So we tried to reach out to these protocols, but it was really hard. It took hours before we were connected. And by that time, it was already drained.
Ian:
Too late.
Brian:
So then we were like, “Oh, should we have just white hat hacked it before this happened?” But that would cause other miscommunications and all the hassle. So we weren’t sure what to do about those situations. And then, the fact that it took hours after this public information was out to get connected with these protocol managers was just outrageous. Oh, sorry. I think I was disconnected for a second. Are you there?
Ian:
Yeah, you’re good. Keep going.
Brian:
Okay. Yeah. So SEAL basically has very well-connected people. People like us, auditors who work with protocols directly, but there are also people from centralized exchanges who have connections, and there are law enforcement people. So that way, once we know about an incident, we can act really fast and more effectively together. So that’s the initiative.
And yeah, so SEAL now has a Safe Harbor program as well. So I think a lot of really smart people who care about security, who care about this ecosystem being more sustainable and durable, are working together to build an alliance, basically. SEAL is the Security Alliance, so we’re basically a strong alliance to keep this ecosystem safe.
Ian:
Yeah, it’s a fantastic setup. Chainalysis also has some of our investigative team participating.
Brian:
Yes.
Ian:
There’s a ton of really smart folks who are involved. And I love it because it’s bringing the community together.
Well, this has been a fantastic conversation, Brian. It’s great to know that there’s smart folks like you and your team who are out there protecting all of us in Web3. Any last thoughts for the audience before we wrap tonight?
Brian:
I mean, stay safe. Don’t click on any random links that you see just because they offer you airdrops, right? I think it starts from small awareness, everyone needs to care and actually think about these things before they act, right? That’s the only way to keep yourself safe. Yeah, be cautious.
Ian:
Yeah, fantastic advice. Be cautious. Stay safe. I love it. Thank you so much, Brian. This was a really fun conversation.
Brian:
Yep. Thank you for having me today.