Artificial Intelligence (AI) is increasingly part of our everyday lives, and this transformation requires a thoughtful approach to innovation. Cisco is committed to delivering technologies and services while managing AI development in a way that upholds our focus on security, data privacy, and human rights – fostering a more inclusive future for all. Today, I am proud to announce Cisco’s Responsible AI initiative, a governance framework that guides internal development and provides a vital communication channel with our customers, partners, industry, and organizations. The Responsible AI initiative is part of the Cisco Trust Center, where we work alongside our customers and suppliers to ensure responsive data-related processes and policies.
AI is inherently different from previous technologies and requires a more responsive approach to governance. For example, AI models are typically trained on data sets and automate the production of insights that can influence decisions and actions. This approach introduces potential issues, including bias, which can arise from inconsistent or incomplete training data. Additionally, some models derive output and insights through machine-generated processes, limiting visibility into the underlying algorithms. These challenges are well known to the industry, and continuous advances in AI may address some of these concerns.
Cisco employs a human-centric approach to design and development, including the processes used to evaluate new technologies. Our approach to designing responsible AI systems is focused on advancing the experience of our customers, partners, and the organizations they serve. The Responsible AI initiative serves two vital roles in the governance of new technologies. First, it defines internal processes to ensure a continuous assessment and management loop with our designers, developers, and partners. Cisco has established development guidelines and testing and response protocols, and has incorporated them into the Cisco Secure Development Lifecycle. Second, the initiative is part of the Cisco Trust Center and expands Cisco’s communication channels and processes to include the governance of AI-related technologies, products, and services.
The Responsible AI initiative is driven by a clear set of principles, furthering Cisco’s commitment to respecting and upholding the human rights of all people, as published in Cisco’s Global Human Rights Policy. Our Responsible AI Principles – transparency, fairness, accountability, privacy, security, and reliability – are consistent with Cisco’s operating practices and directly applicable to the governance of AI technologies. Each principle includes concrete working practices and empowers customers to participate in a continuous cycle of feedback and development. See the Responsible AI Principles for more information.
Cisco is committed to a responsible and reflective approach to the governance of AI technologies based on continuous learning, policy setting, and observation cycles. Cisco will also participate in AI-related initiatives with other industry leaders, standards committees, and global government agencies. We invite you to participate in Cisco’s Responsible AI initiative. Your perspective and feedback will help us shape this technology and our products in a way that is supportive and equitable for all.
See other Cisco perspectives on Responsible AI:
- Artificial Intelligence: driving innovation while safeguarding ethics and privacy
- Designing responsible AI systems
Learn more at trust.cisco.com.
We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!