GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
Chief Information Officers and other technology decision makers continuously seek new and better ways to evaluate and manage their investments in innovation, especially technologies that may drive consequential decisions affecting human rights. As Artificial Intelligence (AI) becomes more prominent in vendor offerings, there is an increasing need to identify, manage, and mitigate the unique risks that AI-based technologies can bring.
Cisco is committed to maintaining a responsible, fair, and reflective approach to the governance, implementation, and use of AI technologies in our solutions. The Cisco Responsible AI initiative maximizes the potential benefits of AI while mitigating bias or inappropriate use of these technologies.
Gartner® Research recently published “Innovation Insight for Bias Detection/Mitigation, Explainable AI and Interpretable AI,” offering guidance on the best ways to incorporate AI-based solutions that facilitate the “understanding, trust and performance accountability required by stakeholders.” This newsletter describes Cisco’s approach to Responsible AI governance and features this Gartner report.
At Cisco, we are committed to managing AI development in a way that augments our focus on security, privacy, and human rights. The Cisco Responsible AI initiative and framework govern how we apply responsible AI controls in our product development lifecycle, manage AI-related incidents that arise, engage externally, and use AI across Cisco’s solutions, services, and enterprise operations.
Our Responsible AI framework comprises:
- Guidance and Oversight by a committee of senior executives across Cisco businesses, engineering, and operations to drive adoption and guide leaders and developers on issues, technologies, processes, and practices related to AI
- Lightweight Controls implemented within Cisco’s Secure Development Lifecycle compliance framework, including unique AI requirements
- Incident Management that extends Cisco’s existing Incident Response system with a small team that reviews and responds to AI-related incidents and works with engineering to resolve them
- Industry Leadership to proactively engage, monitor, and influence industry associations and related bodies for emerging Responsible AI standards
- External Engagement with governments to understand global perspectives on AI’s benefits and risks, and to monitor, analyze, and influence legislation, emerging policy, and regulations affecting AI in all Cisco markets
We base our Responsible AI initiative on principles consistent with Cisco’s operating practices and directly applicable to the governance of AI innovation. These principles (Transparency, Fairness, Accountability, Privacy, Security, and Reliability) guide how we upskill our development teams, map to controls in the Cisco Secure Development Lifecycle, and embed Security by Design, Privacy by Design, and Human Rights by Design in our solutions. This principle-based approach also empowers customers to take part in a continuous feedback cycle that informs our development process.
We strive to meet the highest standards of these principles when developing, deploying, and operating AI-based solutions to respect human rights, encourage innovation, and serve Cisco’s purpose to power an inclusive future for all.
Check out Gartner’s recommendations for integrating AI into an organization’s data systems in this newsletter, and learn more about Cisco’s approach to responsible innovation by reading our introduction, “Transparency Is Key: Introducing Cisco Responsible AI.”
We’d love to hear what you think. Ask a question, comment below, and stay connected with Cisco Secure on social!