Tighter Regulation Says Government
Prime Minister Theresa May and Home Secretary Amber Rudd have suggested that a lack of regulation, social media platforms not doing enough, and encryption putting some messages beyond the reach of law enforcement have combined to create a safe place on the Internet for terrorists to spread their ideology, recruit, communicate and plan attacks.
The Encryption Argument
Devices and some apps, such as WhatsApp, allow users to encrypt (scramble) messages when they are sent, so that only the intended recipient’s device or app can decrypt them.
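The basic principle can be illustrated with a toy example. The sketch below uses a simple XOR one-time pad in Python purely to show the idea that a scrambled message is unreadable without the key; real messaging apps such as WhatsApp use far more sophisticated end-to-end encryption protocols, and this toy code should never be used for actual security.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the message with the corresponding key byte.
    # A toy one-time pad: applying it twice with the same key restores
    # the original message, because XOR is its own inverse.
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"meet at noon"
key = secrets.token_bytes(len(plaintext))  # random key as long as the message

ciphertext = xor_cipher(plaintext, key)

# Without the key, the ciphertext is just scrambled bytes.
assert ciphertext != plaintext

# Only someone holding the key can recover the original message.
assert xor_cipher(ciphertext, key) == plaintext
```

The point of the dispute is visible even in this toy: if the key lives only on the sender’s and recipient’s devices, no intermediary (including the service provider or law enforcement) can read the message in transit.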
Although encryption provides a valuable layer of security at a time of very high levels of cyber crime, protecting personal data and bank details for example, politicians now appear to be arguing for less use of encryption, the disabling of automatic encryption on popular apps, and more ‘back doors’ in popular apps for the authorities to use.
Tech and online security commentators argue, however, that although this may close some of the current avenues for terrorists, it would leave ordinary Internet users more open to attack by cyber criminals. It would also not stop terrorists from encrypting messages manually, communicating in code, or finding alternative, more underground methods of communication.
Could Social Media Giants Do More?
Government criticism has also focused on social media and video-sharing platforms such as Facebook and YouTube for allowing hate, terrorist and extremist content to be displayed, for not finding and removing that content quickly enough and, therefore, for failing to police and censor their own platforms.
The global popularity of sites like Google, YouTube (owned by Google), Facebook and Twitter means that they have large amounts of content uploaded to them daily (400 hours of video are uploaded to YouTube every minute, and 200,000 reports of inappropriate content are received each day).
Despite the significant challenge of identifying and removing suspect content, all of these sites state that terrorist content has no place on their platforms, and that they are investing in and making significant efforts to stop it: Facebook through a combination of technology and human review, for example, and Google through the development of an international forum to fight abuse on its platform.
It has been suggested that the big tech companies may be erring on the side of privacy rather than security, and that they could be spurred on to deal more effectively with problems like extremist content by greater pressure from shareholders and advertisers.
Online Freedom Campaigners
Online freedom campaigners such as The Open Rights Group have pointed out that, in reality, attempts to control and censor the Internet could be very challenging and difficult to enforce. The ORG has also warned that governments should seek sensible solutions, as more regulation could simply push terrorists into harder-to-reach areas of the Internet.
What Does This Mean For Your Business?
As commercial organisations, the big social media platforms clearly have at the very least an interest in protecting their reputations. At the same time, they are likely to be cautious about kneejerk reactions, and resistant to measures that could restrict the freedoms enjoyed by the vast majority of law-abiding users, freedoms that made the platforms so popular in the first place. There may also be some truth in the suggestion that it is convenient for governments to blame tech companies and social media platforms for security failings.
Stopping or limiting the encryption of messages, and building ‘back doors’ into popular devices and systems, may sound helpful for governments trying to tackle extremists, but it could mean more security and cyber-crime risks for the rest of us, and could lead to more cyber attacks on businesses.