How to Practically Reduce Indie AI Tool Security Risks
With the risks of indie AI established, Thacker recommends that CISOs and cybersecurity teams focus on the fundamentals to prepare their organizations for AI tools:
1. Don’t Neglect Standard Due Diligence
We start with the basics for a reason. Ensure someone on your team, or a member of Legal, reads the terms of service for any AI tool that employees request. Of course, this isn’t necessarily a safeguard against data breaches or leaks, and indie vendors may stretch the truth in hopes of placating enterprise customers. But thoroughly understanding the terms will inform your legal strategy if a vendor violates them.
2. Consider Implementing (Or Revising) Application And Data Policies
An application policy provides clear guidelines and transparency to your organization. A simple “allow-list” can cover AI tools built by enterprise SaaS providers, and anything not included falls into the “disallowed” camp. Alternatively, you can establish a data policy that dictates what types of data employees can feed into AI tools. For example, you can forbid inputting any form of intellectual property into AI programs, or sharing data between your SaaS systems and AI apps.
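One way to make such a policy operational rather than aspirational is a simple programmatic gate in the request-review workflow. Below is a minimal sketch in Python; the tool names and data classes are invented for illustration and would be replaced by your organization's own allow-list and data classification.

```python
# Minimal policy gate: an application allow-list plus a data policy.
# Tool names and data classes below are hypothetical examples.
ALLOWED_AI_TOOLS = {"copilot-enterprise", "gemini-workspace"}
BLOCKED_DATA_CLASSES = {"source_code", "customer_pii", "financials"}

def review_request(tool: str, data_classes: set[str]) -> tuple[bool, str]:
    """Apply the application policy first, then the data policy."""
    if tool not in ALLOWED_AI_TOOLS:
        return False, f"'{tool}' is not on the approved AI tool list"
    blocked = data_classes & BLOCKED_DATA_CLASSES
    if blocked:
        return False, "policy forbids sending: " + ", ".join(sorted(blocked))
    return True, "approved"

print(review_request("copilot-enterprise", {"marketing_copy"}))
# (True, 'approved')
print(review_request("novel-indie-tool", {"source_code"}))
# (False, "'novel-indie-tool' is not on the approved AI tool list")
```

Checking the application policy before the data policy mirrors the "allow-list first" approach described above: a tool that isn't approved is rejected regardless of what data would flow into it.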
3. Commit To Regular Employee Training And Education
Few employees seek indie AI tools with malicious intent. The vast majority are simply unaware of the danger they’re exposing your company to when they use unsanctioned AI.
Provide frequent training so they understand the reality of AI tool data leaks and breaches, and what AI-to-SaaS connections entail. Training sessions also serve as opportune moments to explain and reinforce your policies and software review process.
4. Ask The Critical Questions In Your Vendor Assessments
As your team conducts vendor assessments of indie AI tools, insist on the same rigor you apply to enterprise companies under review. This process must include evaluating their security posture and their compliance with data privacy laws. Between the team requesting the tool and the vendor itself, address questions such as:
- Who will access the AI tool? Is it limited to certain individuals or teams? Will contractors, partners, and/or customers have access?
- What individuals and companies have access to prompts submitted to the tool? Does the AI feature rely on a third-party model provider, or on a local model?
- Does the AI tool consume or in any way use external input? What would happen if prompt injection payloads were inserted into that input? What impact could that have?
- Can the tool take consequential actions, such as changes to files, users, or other objects?
- Does the AI tool have any features where traditional vulnerabilities could occur (such as the SSRF, IDOR, and XSS flaws mentioned earlier)? For example, is the prompt or its output rendered anywhere XSS might be possible? Does web-fetching functionality allow requests to internal hosts or the cloud metadata IP address? (See the sketch after this list.)
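That last question lends itself to a concrete check. Below is a minimal sketch of an SSRF guard for a web-fetching feature, assuming a Python fetch pipeline; the function name is illustrative. It resolves the requested host and rejects private, loopback, link-local, and reserved addresses, which covers the 169.254.169.254 cloud metadata endpoint.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_fetch_url(url: str) -> bool:
    """Reject URLs whose host resolves to an internal or
    cloud-metadata address (a classic SSRF target)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Check every address the host resolves to; an attacker can
        # publish a DNS record that points at an internal IP.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True

# The cloud metadata endpoint is link-local, so it is rejected:
print(is_safe_fetch_url("http://169.254.169.254/latest/meta-data/"))  # False
print(is_safe_fetch_url("https://example.com/"))  # True (requires DNS)
```

Note that a guard like this should run at fetch time on the resolved address to avoid DNS-rebinding bypasses; the sketch only illustrates which address classes are worth blocking when you press a vendor on this question.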
AppOmni, a SaaS security vendor, has published a series of CISO Guides to AI Security that provide more detailed vendor assessment questions along with insights into the opportunities and threats AI tools present.
5. Build Relationships And Make Your Team (And Your Policies) Accessible
CISOs, security teams, and other guardians of AI and SaaS security must present themselves to business leaders and their teams as partners in navigating AI. The principles of how CISOs make security a business priority boil down to strong relationships, communication, and accessible guidelines.
Showing the impact of AI-related data leaks and breaches in terms of dollars and opportunities lost makes cyber risks resonate with business teams. This improved communication is critical, but it’s only one step. You may also need to adjust how your team works with the business.
Whether you opt for application or data allow lists, or a combination of both, ensure these guidelines are clearly written, readily available, and actively promoted. When employees know what data is allowed into an LLM, or which approved vendors they can choose for AI tools, your team is far more likely to be viewed as empowering progress, not halting it. If leaders or employees request AI tools that fall outside those bounds, start the conversation by asking what they’re trying to accomplish. When they see you’re interested in their perspective and needs, they’re more willing to partner with you on an appropriate AI tool than to go rogue with an indie AI vendor.
The best odds of keeping your SaaS stack secure from AI tools over the long term lie in creating an environment where the business sees your team as a resource, not a roadblock.
Resource: https://thehackernews.com/2023/11/ai-solutions-are-new-shadow-it.html