Masks and simple photographs are enough to fool some facial recognition technology, highlighting a major shortcoming in what is billed as a more effective security tool. 

The test, by artificial intelligence company Kneron, involved visiting public locations and tricking facial recognition terminals into allowing payment or access. For example, in stores in Asia—where facial recognition technology is widely deployed—the Kneron team used high-quality 3D masks to deceive AliPay and WeChat payment systems in order to make purchases.

Those systems, which resemble the ones seen in airports, use a person’s face rather than a PIN or a fingerprint to validate a user’s identity. Such masks, in theory, could allow fraudsters to use another person’s face—and bank account—to go shopping.

More alarming were the tests deployed at transportation hubs. At the self-boarding terminal in Schiphol Airport, the Netherlands’ largest airport, the Kneron team tricked the sensor with just a photo on a phone screen. The team also says it was able to gain access in this way to rail stations in China where commuters use facial recognition to pay their fare and board trains.

The transportation experiments raise concerns about terrorism at a time when security agencies are exploring facial recognition as a means of saving money and improving efficiency. In the case of the payment tablets, the ability to fool WeChat and AliPay with masks raises the specter of fraud and identity theft.

Schiphol Airport, WeChat, and AliPay did not respond to requests for comment about the effectiveness of their facial recognition technology.

In the case of the masks, the deceptions worked because the facial recognition system already contained an image of the person on whose face the mask was based. Kneron acknowledges, however, that such fraud is unlikely to be widespread because the masks used in the experiment were obtained from specialty mask makers in Japan. But the San Diego-based company notes the technique could be used to defraud famous or wealthy individuals.

“This shows the threat to the privacy of users with sub-par facial recognition that is masquerading as ‘AI,’” Kneron’s CEO, Albert Liu, said. “The technology is available to fix these issues, but firms have not upgraded it. They are taking shortcuts at the expense of security.”

Kneron conducted the experiments to learn about the technology’s limitations while developing its own facial recognition technology. The company, which is backed by high-profile investors including Qualcomm and Sequoia Capital, is creating what it calls “Edge AI,” an artificial intelligence tool that does the job of recognizing individuals entirely on devices rather than through cloud-based services.

Kneron also noted that its experiments could not fool some facial recognition applications, notably Apple’s iPhone X.

The company’s experiment comes at a time of intense debate over how broadly to deploy facial recognition. Fortune writer Robert Hackett, for instance, recently wrote about declining to use the technology to enter the publication’s New York office, citing privacy concerns.

More broadly, the reliability of facial recognition and artificial intelligence has come under scrutiny. Computer scientists, in an experiment similar to Kneron’s, recently fooled face sensors using pictures from Facebook. And in a widely reported 2017 study, MIT researchers showed how Google’s artificial intelligence mistook images of a turtle for a rifle.

Meanwhile, artificial intelligence has also produced tools that can easily reproduce another person’s fingerprints—further underscoring how the biometric tools on which we increasingly rely are less secure than many people believe.