Nearly half (46%) of the Internet’s top 1 million web sites, as ranked by Alexa, are risky, according to Menlo Security’s State of The Web 2016 report.
This is largely due to vulnerable software running on web servers and on underlying ad network domains. The results are significant because risky sites have never been easier to exploit, and traditional security products fail to provide adequate protection. Attackers can effectively take their pick of half the web, allowing them to launch phishing attacks from legitimate sites.
Menlo Security considers a site risky if its homepage or any associated background site is running vulnerable software, falls into a known-bad category, or has had a security incident in the last 12 months. Vulnerable software was the leading factor in classifying a site as risky. Of the 1 million sites, 355,804 were either running vulnerable software or accessing background domains running vulnerable software; 166,853 fell into known-bad categories; and 31,938 had experienced a recent security incident.
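The classification rule as described in the report can be illustrated with a small sketch. The data structure and field names below (SiteScan, runs_vulnerable_software, and so on) are hypothetical stand-ins for the three criteria, not part of Menlo Security's actual tooling.

```python
# Hypothetical sketch of the risky-site rule described above; the SiteScan
# structure and its field names are illustrative assumptions, not the report's.
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List, Optional

@dataclass
class SiteScan:
    runs_vulnerable_software: bool                 # outdated/vulnerable server software detected
    known_bad_category: bool                       # e.g. phishing, malware, botnet categories
    last_incident: Optional[date] = None           # most recent known security incident, if any
    background_sites: List["SiteScan"] = field(default_factory=list)

def is_risky(site: SiteScan, today: date) -> bool:
    """A site is risky if it, or any background site it loads, meets any criterion."""
    recent_incident = (
        site.last_incident is not None
        and today - site.last_incident <= timedelta(days=365)
    )
    if site.runs_vulnerable_software or site.known_bad_category or recent_incident:
        return True
    return any(is_risky(bg, today) for bg in site.background_sites)

# A clean homepage is still flagged when a background ad domain it loads is vulnerable.
homepage = SiteScan(False, False, background_sites=[
    SiteScan(runs_vulnerable_software=True, known_bad_category=False)
])
print(is_risky(homepage, date.today()))  # True
```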
Another key finding was that background requests delivering content to web browsers outnumber user-initiated requests by a ratio of 25:1. Many of the culprit sites identified in the study are destinations largely unknown by name: large ad service networks hidden behind the world’s most visited media sites, including 24-hour news outlets, weather sites, and major metro newspapers.
“Browsing the web is a leap into the unknown. We already knew that ad networks present risk to the public and businesses, but the extreme levels reached in 2016, affecting 46% of the most visited web sites, mean that enterprises must address the problem,” said Kowsik Guruswamy, CTO at Menlo Security.
Risky Sites Have Never Been Easier to Exploit
Today, exploit kits are readily available to anyone, as are the instructional videos that provide step-by-step execution instructions. The expertise requirement has all but vanished. Underscoring this point, the average age of suspected cyber-attackers dropped from 24 to 17(1).
Traditional Security Products Fail to Provide Adequate Protection
Compounding the issue, the vast majority of malware prevention products attempt to prevent attacks by distinguishing between “good” and “bad” elements, then implementing policies intended to allow “good” content and block the “bad.” Detection is never perfect, so every policy decision carries some risk of being wrong. Additionally, enterprises regularly allow access to popular websites for the sake of productivity. Given that nearly half of popular sites are risky, a Web security strategy based on categorization is effectively useless.
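To make that critique concrete, here is a minimal, hypothetical sketch of a category-based allow/block policy of the kind described above; the category names and decision function are illustrative assumptions, not any vendor’s actual implementation. The decision rests entirely on a coarse label for the top-level site, so the vulnerable background content described earlier is never evaluated.

```python
# Hypothetical category-based allow/block policy; categories and logic are
# illustrative only, not drawn from any specific product.
ALLOWED_CATEGORIES = {"news", "weather", "business", "search"}
BLOCKED_CATEGORIES = {"malware", "phishing", "gambling"}

def policy_decision(url_category: str) -> str:
    """Return "allow" or "block" based solely on the site's category label."""
    if url_category in BLOCKED_CATEGORIES:
        return "block"
    if url_category in ALLOWED_CATEGORIES:
        return "allow"
    # Unknown or uncategorized sites are frequently allowed for productivity's sake.
    return "allow"

# A popular news site is allowed outright, even though the report found such sites
# often pull in vulnerable background ad content the policy never inspects.
print(policy_decision("news"))      # allow
print(policy_decision("phishing"))  # block
```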
Phishing Attacks Can Now Utilize Legitimate Sites
Although traditional phishing attacks involve creating a new imposter, or “spoofed,” site, the sheer volume of vulnerable trusted sites makes it easy for attackers to compromise a legitimate site and send its link as part of a phishing campaign. With this approach, attackers no longer need to worry that URL filtering will thwart their efforts, and they avoid the anomalies in the link address, such as misspellings, special characters, or numbers, that might raise suspicion. Clicking a perfectly legitimate link in such a phishing email can expose a user to a drive-by malware exploit that delivers ransomware or marks the beginning of a larger breach.
“Menlo’s analysis confirms the Internet conundrum—use by businesses and consumers is essential but risky,” states Michael Suby, Stratecast VP of Research at Frost & Sullivan. “Furthermore, malware creators have historically demonstrated that they can evade detection techniques. While detection is important in reducing exposure, there is no guarantee of 100% detection. We believe that isolation, engaging the Internet at arm’s length, is an up-and-coming approach to reducing the malware risk inherent in Web browsing and clickable links in email.”