FACEBOOK has been accused of turning down the chance to use an artificial intelligence tool which could have helped the firm detect online hate speech in near real-time.

Executives from Finland-based Utopia Analytics – which has created an AI content moderation tool it says can understand any language – said Facebook turned down offers to use the firm’s technology in 2018.

The AI company said it offered to build Facebook a tool within two weeks that could help it better moderate hate speech originating in Sri Lanka, amid rising tensions in the country and reports of more hate speech appearing online.

Appearing before MPs on the House of Commons Digital, Culture, Media and Sport select committee on disinformation, Utopia chairman Tom Packalen said that when the company approached the social network at the time, Facebook was “not interested” in its technology.


Utopia says its tools can understand context as well as informal and slang language, and can analyse previous publishing decisions made by human moderators to inform their own decisions, which are then made in “milliseconds”.
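Utopia has not published details of how its system works, but the general approach it describes – training a classifier on past decisions made by human moderators and then scoring new posts automatically – can be illustrated with a minimal sketch. The example below assumes scikit-learn and uses invented, hypothetical posts and labels; it is not Utopia’s or Facebook’s actual technology.

```python
# Illustrative sketch only: a small moderation classifier trained on
# hypothetical human-moderator decisions. The library choice (scikit-learn)
# and the example data are assumptions, not Utopia's real system.
import time

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: past posts with the publishing decision
# ("keep" or "remove") made by human moderators.
posts = [
    "You are all welcome here, thanks for sharing",
    "People like them should be driven out of this country",
    "Great photos from the festival yesterday",
    "They deserve whatever violence comes to them",
]
decisions = ["keep", "remove", "keep", "remove"]

# Character n-grams help the model cope with informal spellings and slang
# without relying on a hand-built word list for any one language.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, decisions)

# Scoring a new post once the model is trained takes milliseconds.
new_post = "Drive them all out of the country"
start = time.perf_counter()
verdict = model.predict([new_post])[0]
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"decision: {verdict} ({elapsed_ms:.1f} ms)")
```

In practice a production system would be trained on far larger volumes of moderator-labelled data and re-trained as new human decisions accumulate, which is the feedback loop the company describes.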

In a further statement, Utopia chief executive Mari-Sanna Paukkeri said: “In March 2018 we showed Facebook that we could get rid of the majority of the hate speech from their site within milliseconds of it appearing.

“Facebook have repeatedly claimed that this technology does not exist but despite what they may say, we have been using it successfully for over three years in many countries and with many businesses.”

Paukkeri also claimed that if implemented, the tools could have made a difference in preventing or warning of the Easter terror attacks in Sri Lanka, which killed more than 250 people.

In the aftermath of the attacks, Sri Lankan authorities blocked social media amid concerns it was being used to incite violence in the country. Paukkeri said: “It is a shame that Facebook decided that their internal considerations were more important than getting rid of the inflammatory rhetoric that was posted on their site.”

In response, Facebook said AI was an important tool in content moderation but that more research into the issue was still needed. The company also pointed to its own technology as being capable of spotting and removing hate speech.