Avira Scout: Plans and Tactics (Part 2) – Updated Jan. 2017
Update January 2017
It seems that I have been too optimistic concerning the Open Source idea. We’ve struggled a lot with the goal “compile and test whenever Chromium releases a new version to fix vulnerabilities and bugs ASAP” – and succeeded. But at the same time we’ve learned that the Chromium code base is tremendous, scary big. We crashed our internal version control systems with their default settings more than once. At the moment I am not entirely sure if it will be possible for us to achieve both of the goals below:
- Fix bugs ASAP and push a tested release to the customers
- Have an Open Source version available (including Reproducible Builds, …)
Our priorities have changed, and as much as I love Open Source, fixing bugs & vulnerabilities takes priority. Personally I still hope we will get Scout Open Source.
The first blog post ended halfway down the Rabbit Hole. If you haven’t done so yet, you should definitely read it first.
I have been describing how we can make the (already very secure) browser even more secure. This blog post continues the story.
Our goal is to create an easy-to-use, secure and privacy-respecting browser. These are the more advanced tactics we will be using:
Our Cloud DBs
Adding cloud features to file scanning was a great success. The detection quality for malicious files went straight up. In short:
- On the client, a behavior-detection-style pre-selection runs
- If a file is suspicious, the cloud server is asked whether the file is already known
- An upload is requested
- The file is uploaded to the server
- There we have several detection modules that cannot be deployed on the customers’ PCs (an AI with a large database, sandboxes for behavior classification, etc.). They scan and classify the file
- The database is updated
- The results are sent back, you are protected
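The steps above can be sketched in a few lines. Everything here is illustrative: the `is_suspicious` pre-selection, the hash-based lookup and the server-side classifier are stand-ins, not our real detection logic:

```python
import hashlib

# Hypothetical server-side knowledge base: file hash -> verdict.
KNOWN_FILES = {}

def is_suspicious(data: bytes) -> bool:
    """Stand-in for the behavior-detection pre-selection on the client."""
    return data[:2] == b"MZ"  # toy heuristic: only flag executables

def classify_on_server(data: bytes) -> str:
    """Stand-in for the AI/sandbox modules that stay on the server."""
    return "malicious" if b"EVIL" in data else "clean"

def scan(data: bytes) -> str:
    if not is_suspicious(data):
        return "clean"                       # the file never leaves the client
    digest = hashlib.sha256(data).hexdigest()
    verdict = KNOWN_FILES.get(digest)        # "is the file already known?"
    if verdict is None:
        # Upload requested, file uploaded, classified, database updated.
        verdict = classify_on_server(data)
        KNOWN_FILES[digest] = verdict
    return verdict                           # result sent back: you are protected
```

The second time the same file shows up anywhere, the hash lookup answers immediately and no upload is needed.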
We have built incredible databases covering malicious files over the last years. We should have something similar for the browser and use our large knowledge base and server-side classification tools for web threats as well.
It should look something like this:
- The browser detects something strange (“behavior detection”), this is called pre-selection
- It asks the backend database if this is already known
- If not: relevant data (URL, file, …) is uploaded for inspection
- Our server based tool (and our analysts) will classify the upload and update our databases
- The result is sent back directly (within milliseconds. Yes, the tools are that fast. We will try to improve our analysts 😉 )
- You are protected
- We are improving our “evil parts of the internet” map.
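The same flow, sketched for URLs. The “strangeness” score and the threshold are invented for illustration; the real pre-selection would be far richer:

```python
# Hypothetical reputation store: URL -> verdict ("malicious"/"clean").
URL_DB = {}

def looks_strange(url: str) -> float:
    """Toy pre-selection score for the 'browser detects something strange' step."""
    score = 0.0
    if url.startswith("http://"):                     # no TLS
        score += 0.4
    if any(c.isdigit() for c in url.split("/")[2]):   # digits in the host name
        score += 0.3
    return score

def server_classify(url: str) -> str:
    """Stand-in for our server-based tools (and analysts)."""
    return "malicious" if "phish" in url else "clean"

def check_url(url: str, threshold: float = 0.5) -> str:
    if looks_strange(url) < threshold:
        return "clean"                     # below threshold: nothing is sent
    verdict = URL_DB.get(url)              # ask the backend database
    if verdict is None:
        verdict = server_classify(url)     # upload relevant data for inspection
        URL_DB[url] = verdict              # the "evil parts of the internet" map grows
    return verdict
```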
To get there we will have to improve the signal-to-noise ratio. We are only interested in malicious pages. If the pre-selection in the browser is too aggressive and sends non-malicious pages to us, it’s a waste of CPU cycles and bandwidth. With millions of users as a multiplier, even minor slips become expensive and annoying for everyone involved.
We will also remove private data before sending anything (we are not interested in user data; we are spying on malware). Personal data is actually toxic for us: servers get hacked, databases get stolen, companies get gag-ordered. Not having that kind of data on our servers protects us as well as you. Just think of it: some web pages have the user name in the URL (*facepalm*). I do not think we can automatically detect and remove that kind of data, though. But maybe we could shame those web pages into fixing it … *think*
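A sketch of that scrubbing step for URLs. Which query parameters count as personal is an assumption here; the real list would be much longer:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Query parameters we would treat as personal (illustrative list).
PRIVATE_PARAMS = {"user", "username", "email", "token", "session"}

def scrub_url(url: str) -> str:
    """Remove credentials and personal query parameters before upload."""
    parts = urlsplit(url)
    # Drop userinfo (user:pass@host) if present.
    host = parts.hostname or ""
    if parts.port:
        host = f"{host}:{parts.port}"
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                       if k.lower() not in PRIVATE_PARAMS])
    # Fragments never reach a server anyway; drop them too.
    return urlunsplit((parts.scheme, host, parts.path, query, ""))
```

So `https://alice:pw@example.com/p?user=alice&id=7#top` would be reduced to `https://example.com/p?id=7` before leaving the browser.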
The parts in the source that collect the data and prepare them for sending are Open Source. Here I am asking you to NOT trust us and review the code! 🙂
I hope we find a simple solution to display the data being sent to us before sending it. The only problem is that it could have a negative impact on your browsing experience. Having a modal dialog pop up when you expect a page to load …
One option could be to at least offer a global configuration to switch cloud requests off (always, in incognito mode only, never) and show you in logs what got sent.
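That switch could be as simple as a three-state setting checked before every cloud request, plus a log of what got sent. All names here are made up:

```python
from enum import Enum

class CloudSetting(Enum):
    """Mirrors the proposed switch: cloud requests off always,
    off in incognito mode only, or never off."""
    OFF_ALWAYS = 1
    OFF_IN_INCOGNITO = 2
    OFF_NEVER = 3

def cloud_allowed(setting: CloudSetting, incognito: bool) -> bool:
    if setting is CloudSetting.OFF_ALWAYS:
        return False
    if setting is CloudSetting.OFF_IN_INCOGNITO and incognito:
        return False
    return True

def query_cloud(url: str, setting: CloudSetting, incognito: bool, log: list) -> str:
    """Gate every cloud request; the log shows you what got sent."""
    if not cloud_allowed(setting, incognito):
        return "skipped"
    log.append(url)   # transparency: record exactly what left the browser
    return "queried"  # (the hypothetical network call would go here)
```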
We are selling libraries and databases covering malicious files and web pages.
You want your own AV? Or protection technology in your Tetris game to make it unique? Just contact our SI department and make a deal.
Other companies have thousands of web-crawlers simulating user behavior to identify malware.
Millions of real Avira users are our scouts and sensors.
We need some branding. That includes Avira-specific changes in the browser (names, logos, some other texts), but also links. This is not only relevant for brand awareness but also to keep our users away from Chrome/Chromium support, to avoid confusion (“Which Chrome version do you have?” … listens … “We never released that. Can you please click on ‘About’ and tell me the version number?” … listens … “WTF?!?” => confusion) and to direct them to our support – who actually CAN help.
We will keep improving the build process. There are compiler switches for features like Position Independent Executable (PIE), Fortify Source, etc. that we should enable during compilation (many are already enabled). Most time here will be spent on ensuring that they do not get disabled by accident, are enabled on all platforms, and do not slow down the browser. This task can start simple and suddenly spawn nasty side effects. This is why we need Testing, Testing, Testing.
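“Do not get disabled by accident” is exactly the kind of thing a tiny regression check can guard. A sketch, with an illustrative flag list that is not Chromium’s real build configuration:

```python
# Hardening options we expect in every release build (illustrative list).
REQUIRED_FLAGS = {
    "-fPIE": "Position Independent Executable",
    "-D_FORTIFY_SOURCE=2": "Fortify Source",
    "-fstack-protector-strong": "stack canaries",
}

def missing_hardening(cflags: str) -> list:
    """Return the required hardening flags absent from a build's compiler flags."""
    present = set(cflags.split())
    return sorted(flag for flag in REQUIRED_FLAGS if flag not in present)
```

Run per platform in CI and fail the build if the returned list is non-empty; that turns the paranoia into an automated test instead of a manual checklist.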
Google added the Hotword feature to Chromium and Chrome. It is a nice feature, but it switches on the microphone and “spies” on the user (a convenience feature many users want). For our secure and privacy-respecting browser this crossed a line, though. This is why we will have to verify that no “surprise!!!” extensions get installed by default. One more task for our testers, who add verification tasks to the browser for our specific requirements. Keep in mind: Chrome and Chromium already have very good unit tests and other automated test cases. We just need some extra paranoia. That is the job for our testers in the team.
We will write blog posts covering all the features. The attacks they block, their weaknesses, what we did and will be doing to improve them. We will offer you a guided tour Down the Rabbit Hole. Go with us as far as you dare.
There is so much we can do to improve the browser without touching the core.
We reached the bottom of this specific Rabbit Hole.