How news organisations are searching your Twitter for news

When I wake up in the morning, I like to start my day by watching the news. Usually my choice goes to RTL News, because they have a fun item called the “media overview”. In this item RTL News takes a peek at other media and reports on news stories that have appeared in newspapers and on the internet. In addition, RTL News often refers to stories that they have found on social media sites.


Journalists are increasingly using social media, and more and more they consider social media content a reliable source (Coosto, 2015). Given the enormous amount of content we produce each day, journalists will increasingly rely on Artificial Intelligence techniques to analyze this data and create news stories out of our social content (Lokot & Diakopoulos, 2015). In this article we’ll take a look at Artificial Intelligence (or smart data) in data journalism, and the challenges that arise when using smart data techniques to analyze social media content for news stories.

According to Wilde (2010), Artificial Intelligence consists of advanced data engineering technologies that can perform data modeling and metadata analysis. It is a set of techniques that can assist users in all kinds of tasks, such as planning, problem solving, decision making, sensemaking, and even predicting. Examples of such Artificial Intelligence techniques are data mining (an analytical process for exploring huge amounts of data), machine learning, and natural language processing (Flaounas, Ali, Lansdall-Welfare, De Bie, Mosdell, Lewis & Cristianini, 2013). Some news organisations are already aware of these developments and are using Artificial Intelligence to analyze large amounts of data, such as content on social media websites, and to create news stories out of it.
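To make the data-mining idea concrete, here is a minimal sketch (my own illustration, not a method from the cited papers) of one of its simplest forms: counting which keywords occur most often across a batch of social media posts, which is one way a journalist could spot an emerging topic.

```python
from collections import Counter
import re

# Illustrative stopword list; a real pipeline would use a fuller one.
STOPWORDS = frozenset({"the", "a", "is", "on", "in", "of", "and", "to", "after"})

def top_keywords(posts, n=3):
    """Count word frequencies across posts, ignoring common stopwords."""
    words = []
    for post in posts:
        words += [w for w in re.findall(r"[a-z']+", post.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)

posts = [
    "Flooding reported in the city centre",
    "City centre flooding closes roads",
    "Roads closed after flooding in centre",
]
print(top_keywords(posts))  # → [('flooding', 3), ('centre', 3), ('city', 2)]
```

Real newsroom tools work at a vastly larger scale and with far more sophisticated language models, but the principle is the same: surface the signal in a stream of posts.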


Wait a minute, robots are taking over the news!?
This seems a bit of an extreme statement, but let me give you an example from The New York Times. The data science team at The New York Times developed a tool that can predict and suggest content by computing metrics on social media posts. This tool, called Blossom, can provide specific information about the content of a certain article or blog post, reveal where the content came from, and calculate how the content is performing on social media websites. Based on these metrics, the bot can predict and suggest which content is interesting enough to use for news stories in The New York Times.
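The core idea behind such a tool can be sketched in a few lines. The example below is purely hypothetical (the weights, field names, and scoring formula are my own assumptions, not Blossom's actual model): it ranks posts by a weighted engagement score and suggests the most promising ones.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    shares: int
    likes: int
    comments: int

def engagement_score(post, w_shares=3.0, w_likes=1.0, w_comments=2.0):
    # Weighted sum of engagement signals; weights here are illustrative.
    return (w_shares * post.shares
            + w_likes * post.likes
            + w_comments * post.comments)

def suggest(posts, top=2):
    """Rank posts by engagement score and return the most promising ones."""
    return sorted(posts, key=engagement_score, reverse=True)[:top]

posts = [
    Post("Local election results", shares=120, likes=300, comments=45),
    Post("Celebrity interview", shares=80, likes=900, comments=10),
    Post("Weather update", shares=10, likes=50, comments=5),
]
for p in suggest(posts):
    print(p.title)
```

A production system would of course learn these weights from historical data rather than hard-code them, and would track performance over time instead of scoring a single snapshot.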


This phenomenon of ‘news bots’, robotic accounts that can do reporting, writing, and data analysis automatically, can be used in many different ways. Because of the undiminished growth of data on the internet, and especially on social media, journalists will increasingly rely on and use information from the digital environment. Although these bots provide intriguing opportunities for journalists and news organizations, they also raise questions about limits and risks (Lokot & Diakopoulos, 2015).


Aha! There are limits and risks with news bots
One of the main issues with automated news and the use of news bots in journalism is a possible decrease in transparency. The study by Lokot and Diakopoulos (2015) showed that a substantial share of news bots (45%) are not transparent about their sources, the algorithms they use, or even the fact that they are a news bot. This raises questions about the reliability and trustworthiness of ‘automated’ news articles, and about how journalists should cope with these developments.

A second issue with the use of news bots is accountability for the created news content. A creator is normally accountable for created content in case of a legal violation, but if news is fully created by news bots, then who is to be held accountable for the content? One could point to the credited author or the news organisation, although from a legal point of view that may be impossible (Lokot & Diakopoulos, 2015).

And last but not least, there is the issue of ethics to keep in mind, especially in the use of automated news bots. A news bot might take on an increasingly important role in the judgment, selection, and consumption of content. However, it seems impossible to teach a news bot to act ethically. An algorithm that understands ethics would need to understand how humans come to see a concern, why humans think something matters, and when humans might act on a certain event (Lewis & Westlund, 2015).


So are journalists going to disappear?
So if I revisit my statement ‘Are bots taking over the news?’, I guess it will partially come true. I believe the use of Artificial Intelligence tools will continue to grow in data journalism. Journalists will need these tools to cope with all the data that is produced on social media on a daily basis. But given the issues mentioned above, it is clear that improvement is still needed. Human judgment will always be important because of ethical, transparency, and accountability concerns. Although the nature of a journalist’s work might change in the future, I do not think journalists have to be afraid of being completely replaced by news bots.