⚫Security Considerations: A Defense in Depth Strategy
Web3 introduces an entirely new set of threat vectors. A non-exhaustive list of those most relevant to this product shape are identified below.
Overall Security Philosophy
The best way to go about securing a disparate threat surface is to deploy a defense-in-depth strategy. Defense-in-depth is a strategy commonly used in information security to create multiple layers of security such that the blast radius is minimized with fallback security layers if the initial layer is compromised. See Appendix A for an example of defense-in-depth used within the context of a consumer application.
Wallet Drainers
Wallet drainers leverage malicious smart contracts in combination with a fake or compromised website to lure the victim into signing transaction instructions that will transfer all the victim’s assets to the hacker. This exploit has become quite lucrative with known wallet drainer variants having stolen over $300 million in assets in 2023. Like the ransomware-as-a-service market, advanced persistent threat groups (APT Groups) are providing drainer-as-a-service affiliate models that allow subscribers access to the latest variants as well as resources like phishing kits or social engineering services.
OpenAgents AI will deploy best in class pre-execution control providers like Wallet Guard or Blockaid by default. Such providers will simulate smart contracts in a controlled environment providing the user with validation that a transaction is benign. At scale, these security providers have enough data to use heuristics as well as machine learning to help validate whether a transaction is malicious or not directly on the client. In a multichain world, this is of the utmost importance since malicious smart contract implementations will have subtle differences per blockchain (e.g. programming languages, bytecode, and opcodes).
In this way, OpenAgents AI will prioritize user security out-of-the-box minimizing any chance that a user interacts with a malicious smart contract through malicious airdrops, scam tokens, address poisoning, etc.
Bad actor commits code with an exploit
In a creator-driven model where users can submit a build to be used by an entire network of users there are likely to be bad actors. There are reputation and slashing guardrails in place to disincentivize that type of behavior, but having an added layer of security is paramount to the integrity of the platform. Builds that are submitted to OpenAgents AI will undergo static analysis via automation as well as dynamic analysis with sandboxing capabilities as part of an exhaustive security review process. This strategy is akin to how the Google Play Store reviews new applications submitted to be on the Android mobile operating system.
Vulnerabilities in third-party dependencies
For a product like OpenAgents AI, the threat of a security incident occurring within the software supply chain is perhaps one of the hardest types of threat to prevent. Proactive measures must be taken to minimize the overall attack surface especially when leveraging third-party software libraries. OpenAgents AI will deploy best security practices can help mitigate such a risk:
The OpenAgents AI team will undergo a risk assessment with strict security requirements when determining the eligibility of direct dependencies to be used on the platform. Note there is a slight difference between this, and builds submitted by creators on the platform.
All platform dependencies will be tracked and regularly scanned for vulnerabilities.
Whenever possible, the OpenAgents AI team will have open lines of communication with product development teams who directly contribute to the third-party components used in the platform. This also applies to creators.
On rare occasions, threat actors (usually APT Groups) will utilize a zero-day vulnerability, or a security vulnerability that is previously unknown or unaddressed. When a zero-day is in effect, response and mitigation plans are of the utmost importance. The OpenAgents AI Security Team has an exhaustive list of security runbooks to help reduce the time to deploy a fix if such an issue were to occur. See the Appendix for an example of a security runbook.
Shared Responsibility Model
A shared responsibility model helps distinguish the different roles and responsibilities of users, creators, as well as the platform owner (i.e. OpenAgents AI) so that the highest degree of security is maintained.
OpenAgents AI responsibility “Platform Security”
OpenAgents AI is responsible for protecting the platform layer which also includes securing the LLMs that makeup a diverse multi-agent network.
Platform security expands upon the security guarantees inherited by the cloud infrastructure or blockchain network provider and includes all the necessary configuration and management tasks. An example of properly configuring a compute instance in the cloud would involve using an approved list of compute images (i.e. golden image). Additionally, if an instance with a known vulnerability was in production it would require strategically patching the instance in a timely manner.
Securing LLMs are more nuanced. It’s still a very new security research area, but security concerns for a system like OpenAgents AI could include prompt injection, insecure output handling, sensitive data disclosure, and many more. An example of properly handling a prompt injection vulnerability (i.e. directly or indirectly prompting an LLM to unknowingly execute the attacker’s intentions) would be to sanitize and filter out malicious user prompts.
Creator responsibility “Agent Security”.
Creators are responsible for designing agents that safeguard their user’s security and privacy.
User responsibility “Data Security” **
Users are responsible for using good judgment when authenticating or sharing sensitive data with agents.
Conclusion
While the threat landscape is vast including threats and vulnerabilities at the network, cloud, and application layers, those are largely out of scope for this paper. Increasingly, security best practices are built-in to the underlying managed services (i.e. compute, networking, and storage) that make up the modern information technology era.
Appendix A
An example scenario showcases the effectiveness of a defense in depth strategy in-use:
A user’s account credentials were dumped as part of a large-scale breach that affected millions. A malicious actor gets access to those user credentials and writes a script to do credential stuffing at scale (i.e. using credentials from a data breach to log in to other unrelated services) to gain unauthorized access. Unfortunately, the user’s compromised credentials were consistent across the applications they used, so the malicious actor is able to successfully authenticate to the user’s banking application. Luckily for the compromised user, the banking application has adaptive authentication employed. When the unauthorized user logs in from a new device, the banking application requests a one-time-password sent to the user’s mobile device. Given the malicious actor does not have access to the compromised user’s mobile device, they are unable to make it beyond the adaptive authentication challenge and gain complete access. Additionally, the user is tipped off that a malicious actor has tried to log in with a one-time password for a session they didn’t authorize and changes the password. Although the example is a simple one, it depicts the effectiveness of a defense-in-depth strategy used by the banking application to protect against unauthorized access. Taking this specific defense-in-depth strategy a step further, the banking application could leverage a step-up authentication policy to protect an unauthorized user from completing write actions (e.g. sending money) with a different multi-factor authentication method.
Last updated