Artificial intelligence holds the potential to enhance efficiency, streamline processes, and improve decision-making across various government sectors. But without robust safeguards, there is a heightened risk of biased decision-making, privacy breaches, and misuse of sensitive data.
The Biden administration’s AI executive order establishes new standards for AI safety and security and for the responsible use of AI at federal agencies, with implications for state and local governments as well.
[ Learn about the role of AI and other new technologies in government acquisition in Government Acquisition Outlook: Experts Weigh In on the Future of Procurement. ]
The AI executive order issued by the US federal government touches on a wide range of subjects, and not everything in the order pertains to government procurement software. Some aspects of the executive order that do pertain to government procurement include:
The government supports continuous, responsible innovation in AI, allowing for the development of cutting-edge solutions that can address complex challenges in various sectors. This commitment to innovation positions the government at the forefront of technological progress and emphasizes its intention to remain adaptive and efficient in the face of evolving demands.
The Appian Government Acquisition and Appian eProcurement solution suites were created to help government agencies do just that. These process automation suites, built for the unique needs of federal procurement and of state and local procurement, respectively, have enough built-in flexibility to adapt quickly to the latest advances, so government organizations can harness the full potential of AI as it matures and becomes more powerful over time.
Training an AI model on certain data sets can inadvertently introduce bias, favoring a particular type or size of supplier, for example. Organizations have to be very careful with the data sets they use for training, and they also have to set up monitoring mechanisms, just as they did before the advent of AI.
By doing this, organizations can mitigate bias and increase the likelihood that contracts are awarded based on merit and capability, fostering transparency and public trust in government procurement.
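To make the idea of a monitoring mechanism concrete, here is a minimal sketch of one possible check: comparing contract award rates across supplier size categories and flagging large disparities. The categories, sample records, and the 0.8 threshold (a four-fifths-style heuristic) are illustrative assumptions, not part of any Appian product or the executive order.

```python
# Illustrative bias-monitoring check for procurement outcomes.
# Assumption: award decisions are logged as (supplier_size, was_awarded) pairs.
from collections import defaultdict

def award_rates(awards):
    """Compute the award rate per supplier size category."""
    totals = defaultdict(int)
    wins = defaultdict(int)
    for size, won in awards:
        totals[size] += 1
        if won:
            wins[size] += 1
    return {size: wins[size] / totals[size] for size in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag categories whose award rate falls below `threshold` times
    the best-performing category's rate (a four-fifths-style heuristic)."""
    best = max(rates.values())
    return sorted(size for size, rate in rates.items()
                  if rate < threshold * best)

# Hypothetical sample data for illustration only.
sample = [("small", True), ("small", False), ("small", False), ("small", False),
          ("large", True), ("large", True), ("large", False), ("large", True)]
rates = award_rates(sample)        # small: 0.25, large: 0.75
flagged = flag_disparities(rates)  # ["small"]
```

A real monitoring program would of course use audited data, statistically sound tests, and human review; this sketch only shows the shape of an automated first-pass check.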
Procurement processes often involve sensitive data, the disclosure of which could have significant consequences. Vendor data provided in quotes, for example, may include proprietary information that the government has been entrusted to protect. No vendor wants to see that data show up in a commercial AI model trained on it, or to learn that information about future procurements leaked, giving the vendors who accessed it an unfair advantage.
Using a private AI model can solve these issues. Private AI refers to methods of building and deploying AI technologies that respect the privacy of an organization's data.
With private AI, data never leaves your control: the organization providing the AI services does not share your data or use it to train its own models. Because private AI models are trained on your data alone, their outputs are tailored to your organization, making them more relevant and uniquely valuable.
Many local, state, and federal agencies had issued their own AI policies even before the executive order, and now many more have followed suit. What we can be certain of is that AI regulations will continue to advance, with the goal of making AI a tool for good—opening up new possibilities and streamlining processes—not for perpetuating or fortifying prevailing biases and inequities.
See the AI-powered processes in Appian government acquisition and eProcurement software solutions that accelerate and streamline the procurement lifecycle.