
Ethical AI in 2020: An Australian Perspective


Tim Timchur, Managing Director, 365 Architechs, is a qualified accountant, cybersecurity professional and governance and risk management expert.




 

Artificial intelligence (AI) is here.  Smartphones, apps and sensors already make use of technologies that approximate the workings of the human brain, with an ability to learn that is no longer uniquely human.

 

It promises improvements in productivity, financial savings and better decisions, but at what cost?  Should AI be regulated, or will legislation never keep up with the constant evolution of these technologies?  Will non-mandatory codes of conduct or principles-based rules be enough to make corporations act ethically?  What role do boards play in ensuring their organisations comply with their obligations, manage their reputations and act in the best interests of the communities that grant them their licence to operate?

 

Governments and organisations around the world are considering these questions today, with many planning varying levels of involvement, from voluntary guidance through to regulation.

 

In 2019, the Australian Government’s Department of Industry, Innovation and Science, together with CSIRO’s Data61, issued a discussion paper titled Artificial Intelligence: Australia’s Ethical Framework to consult with industry and the general public on the topic.  Eight voluntary principles were published as a result:

 

  • Human, social and environmental wellbeing: Throughout their lifecycle, AI systems should benefit individuals, society and the environment.
  • Human-centred values: Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals.
  • Fairness: Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  • Privacy protection and security: Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  • Reliability and safety: Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose.
  • Transparency and explainability: There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them.
  • Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system.
  • Accountability: Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.
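
As an illustration only, the sketch below shows one way an organisation might record an internal review of an AI system against the eight principles above. It is a minimal, hypothetical example in Python: the principle names come from the published framework, while the class, field names and sample system are assumptions made for the purpose of illustration, not part of the framework itself.

```python
# Hypothetical sketch: recording an AI ethics review against the eight
# voluntary principles. The principle names come from the framework above;
# the structure and field names are illustrative only.

from dataclasses import dataclass, field

PRINCIPLES = [
    "Human, social and environmental wellbeing",
    "Human-centred values",
    "Fairness",
    "Privacy protection and security",
    "Reliability and safety",
    "Transparency and explainability",
    "Contestability",
    "Accountability",
]

@dataclass
class EthicsReview:
    """One review of an AI system against the voluntary principles."""
    system_name: str
    findings: dict = field(default_factory=dict)  # principle -> review notes

    def assess(self, principle: str, notes: str) -> None:
        if principle not in PRINCIPLES:
            raise ValueError(f"Unknown principle: {principle}")
        self.findings[principle] = notes

    def outstanding(self) -> list:
        """Principles not yet assessed for this system."""
        return [p for p in PRINCIPLES if p not in self.findings]

# Example usage with a hypothetical system name.
review = EthicsReview("customer-churn-model")
review.assess("Fairness", "Bias testing completed across customer segments.")
review.assess("Accountability", "Model owner and approver recorded in the register.")
print(review.outstanding())  # principles still to be reviewed
```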

 

Later in 2019, the Australian Human Rights Commission issued a discussion paper titled Human Rights and Technology with three key principles:

 

  • Regulation should protect human rights
  • The law should be clear and enforceable
  • Co-regulation and self-regulation should support human rights compliant, ethical decision making

 

Standards Australia Working Group IT043: Artificial Intelligence, of which 365 Architechs Managing Director Tim Timchur is a member, is currently developing an Australian Standard for AI Ethics.  A final report titled An Artificial Intelligence Standards Roadmap: Making Australia's Voice Heard was issued in 2019 and included four goals and eight recommendations.

 

Around the world, countries are busily developing their own sets of AI ethics principles, but there is little consistency between them.  COVID-19 and videoconferencing have recently taught us what a small planet we live on.  What will different rules for different countries mean for the global marketplace?

 

Questions for organisations today

Businesses stand to make considerable gains from the opportunities presented by AI, but must act within the law for the benefit of their members and, increasingly, in the legitimate interests of the communities in which they operate.

 

Board directors, executives and management alike should ask:

 

  • What AI is currently in use in our organisation?
  • What opportunities could AI bring us today, tomorrow and in the future?
  • What AI risks should be included on our risk registers? (see the sketch after this list)
  • How are we addressing the ethical considerations behind AI development and application?
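
To make the risk-register question above more concrete, the sketch below shows how AI-specific risks might sit alongside existing entries in a register. The risks, owners, ratings and controls are hypothetical examples chosen for illustration, not a recommended taxonomy.

```python
# Hypothetical example only: AI-specific entries in a risk register.
# Risk descriptions, ratings and field names are illustrative.

ai_risk_register = [
    {
        "risk": "Model produces biased or discriminatory outcomes",
        "owner": "Head of Data & Analytics",
        "likelihood": "Possible",
        "impact": "Major",
        "controls": ["Bias testing before release", "Periodic fairness reviews"],
    },
    {
        "risk": "Personal data used to train models without adequate consent",
        "owner": "Privacy Officer",
        "likelihood": "Unlikely",
        "impact": "Severe",
        "controls": ["Privacy impact assessments", "Data minimisation policy"],
    },
]

# A board pack might simply surface each AI risk with its impact and owner.
for entry in ai_risk_register:
    print(f'{entry["risk"]} (impact: {entry["impact"]}, owner: {entry["owner"]})')
```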

 

We live in a world of volatility, uncertainty, complexity and ambiguity. 365 Architechs provides a range of digital transformation, cybersecurity and artificial intelligence services to assist organisations in leveraging AI technologies today.