Google

Using Chrome’s accessibility APIs to find security bugs

Posted by Adrian Taylor, Security Engineer, Chrome













Chrome’s user interface (UI) code is complex, and sometimes has bugs.


Are those bugs security bugs? Specifically, if a user’s clicks and actions result in memory corruption, is that something that an attacker can exploit to harm that user?


Our security severity guidelines say “yes, sometimes.” For example, an attacker could very likely convince a user to click an autofill prompt, but it will be much harder to convince the user to step through a whole flow of different dialogs.


Even if these bugs aren’t the most easily exploitable, it takes a great deal of time for our security shepherds to make these determinations. User interface bugs are often flaky (that is, not reliably reproducible). Also, even if these bugs aren’t necessarily deemed to be exploitable, they may still be annoying crashes which bother the user.


It would be great if we could find these bugs automatically.


If only the whole tree of Chrome UI controls were exposed, somehow, such that we could enumerate and interact with each UI control automatically.


Aha! Chrome exposes all the UI controls to assistive technology. Chrome goes to great lengths to ensure its entire UI is exposed to screen readers, braille devices and other such assistive tech. This tree of controls includes all the toolbars, menus, and the structure of the page itself. This structural definition of the browser user interface is already sometimes used in other contexts, for example by some password managers, demonstrating that investing in accessibility has benefits for all users. We’re now taking that investment and leveraging it to find security bugs, too.


Specifically, we’re now “fuzzing” that accessibility tree - that is, interacting with the different UI controls semi-randomly to see if we can make things crash. This technique has a long pedigree.


Screen reader technology is a bit different on each platform, but on Linux the tree can be explored using Accerciser.
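The idea of enumerating the control tree can be sketched as a simple depth-first walk. This is a minimal illustration using a toy node class in place of the real AT-SPI bindings (pyatspi exposes similar name/role/children accessors); the tree contents here are invented, not Chrome's actual structure:

```python
# Toy stand-in for an accessibility node; real AT-SPI objects expose
# similar name/role/children accessors via pyatspi.
class Control:
    def __init__(self, name, role, children=None):
        self.name = name
        self.role = role
        self.children = children or []

def enumerate_controls(node, depth=0, out=None):
    """Depth-first walk of the control tree, collecting (depth, role, name)."""
    if out is None:
        out = []
    out.append((depth, node.role, node.name))
    for child in node.children:
        enumerate_controls(child, depth + 1, out)
    return out

# Illustrative tree, loosely mimicking a browser window.
root = Control("Test - Chromium", "frame", [
    Control("", "panel", [Control("Bookmarks", "push button")]),
])
for depth, role, name in enumerate_controls(root):
    print("  " * depth + f"{role}: {name!r}")
```

A fuzzer then interacts with each enumerated control rather than just printing it.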



Screenshot of Accerciser showing the tree of UI controls in Chrome


All we have to do is explore the same tree of controls with a fuzzer. How hard can it be?



“We do this not because it is easy, but because we thought it would be easy” - Anon.


Actually we never thought this would be easy, and a few different bits of tech have had to fall into place to make this possible. Specifically,



There are lots of combinations of ways to interact with Chrome. Truly randomly clicking on UI controls probably won’t find bugs - we would like to leverage coverage-guided fuzzing to help the fuzzer select combinations of controls that seem to reach into new code within Chrome.

We need any such bugs to be genuine. We therefore need to fuzz the actual Chrome UI, or something very similar, rather than exercising parts of the code in an unrealistic unit-test-like context. That’s where our InProcessFuzzer framework comes into play - it runs fuzz cases within a Chrome browser_test; essentially a real version of Chrome.

But such browser_tests have a high startup cost. We need to amortize that cost over thousands of test cases by running a batch of them within each browser invocation. Centipede is designed to do that.

But each test case won’t be idempotent. Within a given invocation of the browser, the UI state may be successively modified by each test case. We intend to add concatenation to Centipede to resolve this.

Chrome is a noisy environment with lots of timers, which may well confuse coverage-guided fuzzers. Gathering coverage for such a large binary is slow in itself. So, we don’t know if coverage-guided fuzzing will successfully explore the UI paths here.



All of these concerns are common to the other fuzzers which run in the browser_test context, most notably our new IPC fuzzer (blog posts to follow). But the UI fuzzer presented some specific challenges.


Finding UI bugs is only useful if they’re actionable. Ideally, that means:



Our fuzzing infrastructure gives a thorough set of diagnostics.

It can bisect to find when the bug was introduced and when it was fixed.

It can minimize complex test cases into the smallest possible reproducer.

The test case is descriptive and says which UI controls were used, so a human may be able to reproduce it.



These requirements together mean that the test cases should be stable across each Chrome version - if a given test case reproduces a bug with Chrome 125, hopefully it will do so in Chrome 124 and Chrome 126 (assuming the bug is present in both). Yet this is tricky, since Chrome UI controls are deeply nested and often anonymous.


Initially, the fuzzer picked controls simply based on their ordinal at each level of the tree (for instance “control 3 nested in control 5 nested in control 0”) but such test cases are unlikely to be stable as the Chrome UI evolves. Instead, we settled on an approach where the controls are named, when possible, and otherwise identified by a combination of role and ordinal. This yields test cases like this:


action {
  path_to_control {
    named {
      name: "Test - Chromium"
    }
  }
  path_to_control {
    anonymous {
      role: "panel"
    }
  }
  path_to_control {
    anonymous {
      role: "panel"
    }
  }
  path_to_control {
    anonymous {
      role: "panel"
    }
  }
  path_to_control {
    named {
      name: "Bookmarks"
    }
  }
  take_action {
    action_id: 12
  }
}
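Resolving such a selector path against the control tree can be sketched as follows. This is an illustrative reimplementation of the idea (dictionary-based tree, hypothetical helper names), not the fuzzer's actual code:

```python
def find_control(children, selector):
    """Resolve one path_to_control step: by name, or by (role, ordinal)."""
    if "name" in selector:
        for child in children:
            if child.get("name") == selector["name"]:
                return child
        return None
    # Anonymous controls: pick the nth child with the requested role.
    matches = [c for c in children if c.get("role") == selector["role"]]
    ordinal = selector.get("ordinal", 0)
    return matches[ordinal] if ordinal < len(matches) else None

def resolve_path(root, path):
    """Walk the selector path from the root; None if any step fails."""
    node = root
    for selector in path:
        node = find_control(node.get("children", []), selector)
        if node is None:
            return None
    return node

# A toy tree and path (illustrative only; real paths are deeper, as above).
tree = {"children": [
    {"name": "Test - Chromium", "children": [
        {"role": "panel", "children": [
            {"name": "Bookmarks", "children": []},
        ]},
    ]},
]}
path = [{"name": "Test - Chromium"}, {"role": "panel"}, {"name": "Bookmarks"}]
target = resolve_path(tree, path)
```

Because names and roles are stable across Chrome versions in a way ordinals are not, the same path tends to keep resolving as the UI evolves.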


Fuzzers are unlikely to stumble across these control names by chance, even with the instrumentation applied to string comparisons. In fact, this by-name approach turned out to be only 20% as effective as picking controls by ordinal. To resolve this we added a custom mutator which is smart enough to put in place control names and roles which are known to exist. We randomly use this mutator or the standard libprotobuf-mutator in order to get the best of both worlds. This approach has proven to be about 80% as quick as the original ordinal-based mutator, while providing stable test cases.
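The hybrid mutation strategy described above can be sketched like this: with some probability apply a dictionary-aware mutation that splices in control names and roles known to exist, otherwise fall back to a generic random mutation. All names here are illustrative stand-ins, and `generic_mutate` stands in for libprotobuf-mutator:

```python
import random

KNOWN_NAMES = ["Bookmarks", "Reload", "Test - Chromium"]  # harvested from the live tree
KNOWN_ROLES = ["panel", "push button", "menu item"]

def generic_mutate(path, rng):
    """Stand-in for the standard libprotobuf-mutator: random structural change."""
    mutated = list(path)
    if mutated and rng.random() < 0.5:
        mutated.pop(rng.randrange(len(mutated)))
    else:
        mutated.append({"role": rng.choice(KNOWN_ROLES)})
    return mutated

def dictionary_mutate(path, rng):
    """Custom mutator: splice in a control name or role known to exist."""
    mutated = list(path)
    if rng.random() < 0.5:
        mutated.append({"name": rng.choice(KNOWN_NAMES)})
    else:
        mutated.append({"role": rng.choice(KNOWN_ROLES)})
    return mutated

def mutate(path, rng, smart_probability=0.5):
    """Randomly pick one of the two mutators to get the best of both worlds."""
    if rng.random() < smart_probability:
        return dictionary_mutate(path, rng)
    return generic_mutate(path, rng)

rng = random.Random(0)
case = mutate([{"name": "Test - Chromium"}], rng)
```

The `smart_probability` knob is hypothetical; the point is only that mixing the two mutators preserves coverage while keeping test cases stable and readable.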



Chart of code coverage achieved by minutes fuzzing with different strategies

So, does any of this work?


We don’t know yet! - and you can follow along as we find out. The fuzzer found a couple of potential bugs (currently access restricted) in the accessibility code itself but hasn’t yet explored far enough to discover bugs in Chrome’s fundamental UI. But, at the time of writing, this has only been running on our ClusterFuzz infrastructure for a few hours, and isn’t yet reflected on our coverage dashboard. If you’d like to follow along, keep an eye on that dashboard as it expands to cover UI code.

Sustaining Digital Certificate Security – Entrust Certificate Distrust

Posted by Chrome Root Program, Chrome Security Team


Update (09/10/2024): In support of more closely aligning Chrome’s planned compliance action with a major release milestone (i.e., M131), blocking action will now begin on November 12, 2024. This post has been updated to reflect the date change. Website operators who will be impacted by the upcoming change can explore continuity options offered by Entrust. Entrust has expressed its commitment to continuing to support customer needs, and is best positioned to describe the available options for website operators. Learn more at Entrust’s TLS Certificate Information Center.





The Chrome Security Team prioritizes the security and privacy of Chrome’s users, and we are unwilling to compromise on these values.


The Chrome Root Program Policy states that CA certificates included in the Chrome Root Store must provide value to Chrome end users that exceeds the risk of their continued inclusion. It also describes many of the factors we consider significant when CA Owners disclose and respond to incidents. When things don’t go right, we expect CA Owners to commit to meaningful and demonstrable change resulting in evidenced continuous improvement.


Over the past several years, publicly disclosed incident reports have highlighted a pattern of concerning behaviors by Entrust that falls short of the above expectations and has eroded confidence in their competence, reliability, and integrity as a publicly-trusted CA Owner.


In response to the above concerns and to preserve the integrity of the Web PKI ecosystem, Chrome will take the following actions.


Upcoming change in Chrome 131 and higher:



TLS server authentication certificates validating to the following Entrust roots whose earliest Signed Certificate Timestamp (SCT) is dated after November 11, 2024 (11:59:59 PM UTC), will no longer be trusted by default.


CN=Entrust Root Certification Authority - EC1,OU=See www.entrust.net/legal-terms+OU=(c) 2012 Entrust, Inc. - for authorized use only,O=Entrust, Inc.,C=US

CN=Entrust Root Certification Authority - G2,OU=See www.entrust.net/legal-terms+OU=(c) 2009 Entrust, Inc. - for authorized use only,O=Entrust, Inc.,C=US

CN=Entrust.net Certification Authority (2048),OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.)+OU=(c) 1999 Entrust.net Limited,O=Entrust.net

CN=Entrust Root Certification Authority,OU=www.entrust.net/CPS is incorporated by reference+OU=(c) 2006 Entrust, Inc.,O=Entrust, Inc.,C=US

CN=Entrust Root Certification Authority - G4,OU=See www.entrust.net/legal-terms+OU=(c) 2015 Entrust, Inc. - for authorized use only,O=Entrust, Inc.,C=US

CN=AffirmTrust Commercial,O=AffirmTrust,C=US

CN=AffirmTrust Networking,O=AffirmTrust,C=US

CN=AffirmTrust Premium,O=AffirmTrust,C=US

CN=AffirmTrust Premium ECC,O=AffirmTrust,C=US
TLS server authentication certificates validating to the above set of roots whose earliest SCT is on or before November 11, 2024 (11:59:59 PM UTC), will be unaffected by this change.

This approach attempts to minimize disruption to existing subscribers, using a recently announced Chrome feature to remove default trust based on the SCTs in certificates.
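The effective rule (only chains to an affected root whose earliest SCT postdates the cutoff lose default trust) can be expressed as a small predicate. This is a sketch of the logic as described above, with illustrative helper names; it is not Chrome's implementation:

```python
from datetime import datetime, timezone

# Cutoff from the policy: earliest SCT after Nov 11, 2024 23:59:59 UTC is distrusted.
CUTOFF = datetime(2024, 11, 11, 23, 59, 59, tzinfo=timezone.utc)

def trusted_by_default(sct_timestamps, chains_to_affected_root):
    """True if a certificate keeps default trust under the SCT-based constraint."""
    if not chains_to_affected_root:
        return True  # certificates from other CAs are unaffected
    earliest = min(sct_timestamps)
    return earliest <= CUTOFF

# A cert logged before the cutoff keeps working; one logged after does not.
before = datetime(2024, 10, 1, tzinfo=timezone.utc)
after = datetime(2024, 12, 1, tzinfo=timezone.utc)
```

Note that it is the earliest SCT that matters, so re-logging an old certificate after the cutoff does not cause it to be distrusted.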


Additionally, should a Chrome user or enterprise explicitly trust any of the above certificates on a platform and version of Chrome relying on the Chrome Root Store (e.g., explicit trust is conveyed through a Group Policy Object on Windows), the SCT-based constraints described above will be overridden and certificates will function as they do today.


To further minimize risk of disruption, website operators are encouraged to review the “Frequently Asked Questions" listed below.

Why is Chrome taking action?



Certification Authorities (CAs) serve a privileged and trusted role on the Internet, underpinning encrypted connections between browsers and websites. With this tremendous responsibility comes an expectation of adhering to reasonable and consensus-driven security and compliance expectations, including those defined by the CA/Browser TLS Baseline Requirements.


Over the past six years, we have observed a pattern of compliance failures, unmet improvement commitments, and the absence of tangible, measurable progress in response to publicly disclosed incident reports. When these factors are considered in aggregate and weighed against the inherent risk each publicly-trusted CA poses to the Internet ecosystem, it is our opinion that Chrome’s continued trust in Entrust is no longer justified.

When will this action happen?



Blocking action will begin on approximately November 12, 2024, affecting certificates issued at that point or later.


Blocking action will occur in versions of Chrome 131 and greater on Windows, macOS, ChromeOS, Android, and Linux. Apple policies prevent the Chrome Certificate Verifier and corresponding Chrome Root Store from being used on Chrome for iOS.

What is the user impact of this action?



By default, Chrome users in the above populations who navigate to a website serving a certificate issued by Entrust or AffirmTrust after November 11, 2024 (11:59:59 PM UTC) will see a full page interstitial similar to this one.


Certificates issued by other CAs are not impacted by this action.

How can a website operator tell if their website is affected?



Website operators can determine if they are affected by this issue by using the Chrome Certificate Viewer.


Use the Chrome Certificate Viewer



Navigate to a website (e.g., https://www.google.com)

Click the “Tune" icon

Click “Connection is Secure"

Click “Certificate is Valid" (the Chrome Certificate Viewer will open)


Website owner action is not required if the “Organization (O)” field listed beneath the “Issued By" heading does not contain “Entrust" or “AffirmTrust”.

Website owner action is required if the “Organization (O)” field listed beneath the “Issued By" heading contains “Entrust" or “AffirmTrust”.




What does an affected website operator do?



We recommend that affected website operators transition to a new publicly-trusted CA Owner as soon as reasonably possible. To avoid adverse website user impact, action must be completed before the existing certificate(s) expire if expiry is planned to take place after November 11, 2024 (11:59:59 PM UTC).


While website operators could delay the impact of blocking action by choosing to collect and install a new TLS certificate issued from Entrust before Chrome’s blocking action begins on November 12, 2024, website operators will inevitably need to collect and install a new TLS certificate from one of the many other CAs included in the Chrome Root Store.

Can I test these changes before they take effect?



Yes.


A command-line flag was added beginning in Chrome 128 (available in Canary/Dev at the time of this post’s publication) that allows administrators and power users to simulate the effect of an SCTNotAfter distrust constraint as described in this blog post FAQ.


How to: Simulate an SCTNotAfter distrust

1. Close all open versions of Chrome

2. Start Chrome using the following command-line flag, substituting variables described below with actual values


--test-crs-constraints=$[Comma Separated List of Trust Anchor Certificate SHA256 Hashes]:sctnotafter=$[epoch_timestamp]


3. Evaluate the effects of the flag with test websites

Example: The following command will simulate an SCTNotAfter distrust with an effective date of April 30, 2024 11:59:59 PM GMT for all of the Entrust trust anchors included in the Chrome Root Store. The expected behavior is that any website whose certificate is issued before the enforcement date/timestamp will function in Chrome, and all issued after will display an interstitial.

--test-crs-constraints=02ED0EB28C14DA45165C566791700D6451D7FB56F0B2AB1D3B8EB070E56EDFF5,
43DF5774B03E7FEF5FE40D931A7BEDF1BB2E6B42738C4E6D3841103D3AA7F339,
6DC47172E01CBCB0BF62580D895FE2B8AC9AD4F873801E0C10B9C837D21EB177,
73C176434F1BC6D5ADF45B0E76E727287C8DE57616C1E6E6141A2B2CBC7D8E4C,
DB3517D1F6732A2D5AB97C533EC70779EE3270A62FB4AC4238372460E6F01E88,
0376AB1D54C5F9803CE4B2E201A0EE7EEF7B57B636E8A93C9B8D4860C96F5FA7,
0A81EC5A929777F145904AF38D5D509F66B5E2C58FCDB531058B0E17F3F0B41B,
70A73F7F376B60074248904534B11482D5BF0E698ECC498DF52577EBF2E93B9A,
BD71FDF6DA97E4CF62D1647ADD2581B07D79ADF8397EB4ECBA9C5E8488821423
:sctnotafter=1714521599

Illustrative Command (on Windows):

"C:\Users\User123\AppData\Local\Google\Chrome SxS\Application\chrome.exe" --test-crs-constraints=02ED0EB28C14DA45165C566791700D6451D7FB56F0B2AB1D3B8EB070E56EDFF5,43DF5774B03E7FEF5FE40D931A7BEDF1BB2E6B42738C4E6D3841103D3AA7F339,6DC47172E01CBCB0BF62580D895FE2B8AC9AD4F873801E0C10B9C837D21EB177,73C176434F1BC6D5ADF45B0E76E727287C8DE57616C1E6E6141A2B2CBC7D8E4C,DB3517D1F6732A2D5AB97C533EC70779EE3270A62FB4AC4238372460E6F01E88,0376AB1D54C5F9803CE4B2E201A0EE7EEF7B57B636E8A93C9B8D4860C96F5FA7,0A81EC5A929777F145904AF38D5D509F66B5E2C58FCDB531058B0E17F3F0B41B,70A73F7F376B60074248904534B11482D5BF0E698ECC498DF52577EBF2E93B9A,BD71FDF6DA97E4CF62D1647ADD2581B07D79ADF8397EB4ECBA9C5E8488821423:sctnotafter=1714521599
Illustrative Command (on macOS):

"/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary" --test-crs-constraints=02ED0EB28C14DA45165C566791700D6451D7FB56F0B2AB1D3B8EB070E56EDFF5,43DF5774B03E7FEF5FE40D931A7BEDF1BB2E6B42738C4E6D3841103D3AA7F339,6DC47172E01CBCB0BF62580D895FE2B8AC9AD4F873801E0C10B9C837D21EB177,73C176434F1BC6D5ADF45B0E76E727287C8DE57616C1E6E6141A2B2CBC7D8E4C,DB3517D1F6732A2D5AB97C533EC70779EE3270A62FB4AC4238372460E6F01E88,0376AB1D54C5F9803CE4B2E201A0EE7EEF7B57B636E8A93C9B8D4860C96F5FA7,0A81EC5A929777F145904AF38D5D509F66B5E2C58FCDB531058B0E17F3F0B41B,70A73F7F376B60074248904534B11482D5BF0E698ECC498DF52577EBF2E93B9A,BD71FDF6DA97E4CF62D1647ADD2581B07D79ADF8397EB4ECBA9C5E8488821423:sctnotafter=1714521599
Note: If copy and pasting the above commands, ensure no line-breaks are introduced.

Learn more about command-line flags here.
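Assembling the flag value programmatically avoids the line-break mistakes warned about above, and lets you compute the epoch timestamp instead of hardcoding it. A sketch (hash list abbreviated to one entry; the helper name is illustrative):

```python
from datetime import datetime, timezone

def build_crs_constraints_flag(hashes, not_after):
    """Build the --test-crs-constraints value: a comma-separated list of trust
    anchor SHA256 hashes followed by :sctnotafter=<epoch>, all on one line."""
    epoch = int(not_after.timestamp())
    return "--test-crs-constraints=" + ",".join(hashes) + f":sctnotafter={epoch}"

# April 30, 2024 11:59:59 PM GMT, as used in the example above.
cutoff = datetime(2024, 4, 30, 23, 59, 59, tzinfo=timezone.utc)
flag = build_crs_constraints_flag(
    ["02ED0EB28C14DA45165C566791700D6451D7FB56F0B2AB1D3B8EB070E56EDFF5"],
    cutoff,
)
print(flag)
```

The computed timestamp matches the 1714521599 used in the example commands.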
I use Entrust certificates for my internal enterprise network, do I need to do anything?
Beginning in Chrome 127, enterprises can override Chrome Root Store constraints like those described for Entrust in this blog post by installing the corresponding root CA certificate as a locally-trusted root on the platform Chrome is running (e.g., installed in the Microsoft Certificate Store as a Trusted Root CA).
How do enterprises add a CA as locally-trusted?
Customer organizations should defer to platform provider guidance.
What about other Google products?
Other Google product team updates may be made available in the future.

Staying Safe with Chrome Extensions

Posted by Benjamin Ackerman, Anunoy Ghosh and David Warren, Chrome Security Team







Chrome extensions can boost your browsing, empowering you to do anything from customizing the look of sites to providing personalized advice when you’re planning a vacation. But as with any software, extensions can also introduce risk.


That’s why we have a team whose only job is to focus on keeping you safe as you install and take advantage of Chrome extensions. Our team:




Provides you with a personalized summary of the extensions you’ve installed

Reviews extensions before they’re published on the Chrome Web Store

Continuously monitors extensions after they’re published


A summary of your extensions



The top of the extensions page (chrome://extensions) warns you of any extensions you have installed that might pose a security risk. (If you don’t see a warning panel, you probably don’t have any extensions you need to worry about.) The panel includes:



Extensions suspected of including malware

Extensions that violate Chrome Web Store policies

Extensions that have been unpublished by a developer, which might indicate that an extension is no longer supported

Extensions that aren’t from the Chrome Web Store

Extensions that haven’t published what they do with data they collect and other privacy practices



You’ll get notified when Chrome’s Safety Check has recommendations for you, or you can check on your own by running Safety Check. Just type “run safety check” in Chrome’s address bar and select the corresponding shortcut: “Go to Chrome safety check.”


User flow of removing extensions highlighted by Safety Check.


Besides the Safety Check, you can visit the extensions page directly in a number of ways:



Navigate to chrome://extensions

Click the puzzle icon and choose “Manage extensions”

Click the More choices menu and choose Extensions > Manage Extensions


Reviewing extensions before they’re published



Before an extension is even accessible to install from the Chrome Web Store, we have two levels of verification to ensure an extension is safe:



An automated review: Each extension gets examined by our machine-learning systems to spot possible violations or suspicious behavior.

A human review: Next, a team member examines the images, descriptions, and public policies of each extension. Depending on the results of both the automated and manual review, we may perform an even deeper and more thorough review of the code.



This review process weeds out the overwhelming majority of bad extensions before they even get published. In 2024, less than 1% of all installs from the Chrome Web Store were found to include malware. We're proud of this record and yet some bad extensions still get through, which is why we also monitor published extensions.

Monitoring published extensions



The same Chrome team that reviews extensions before they get published also reviews extensions that are already on the Chrome Web Store. And just like the pre-check, this monitoring includes both human and machine reviews. We also work closely with trusted security researchers outside of Google, and even pay researchers who report possible threats to Chrome users through our Developer Data Protection Rewards Program.


What about extensions that get updated over time, or are programmed to execute malicious code at a later date? Our systems monitor for that as well, by periodically reviewing what extensions are actually doing and comparing that to the stated objectives defined by each extension in the Chrome Web Store.


If the team finds that an extension poses a severe risk to Chrome users, it’s immediately removed from the Chrome Web Store and the extension gets disabled on all browsers that have it installed.

The extensions page highlights when you have a potentially unsafe extension downloaded




Other steps you can take to stay safe




Review new extensions before installing them



The Chrome Web Store provides useful information about each extension and its developer. The following information should help you decide whether it’s safe to install an extension:



Verified and featured badges are awarded by the Chrome team to extensions that follow our technical best practices and meet a high standard of user experience and design

Ratings and reviews from our users

Information about the developer

Privacy practices, including information about how an extension handles your data



Be careful of sites that try to quickly persuade you to install extensions, especially if the site has little in common with the extension.

Review extensions you’ve already installed



Even though Safety Check and your Extensions page (chrome://extensions) warn you of extensions that might pose a risk, it’s still a good idea to review your extensions from time to time.



Uninstall extensions that you no longer use.

Review the description of an extension in the Chrome Web Store, considering the extension’s ratings, reviews, and privacy practices — reviews can change over time.

Compare an extension’s stated goals with 1) the permissions requested by an extension and 2) the privacy practices published by the extension. If requested permissions don’t align with stated goals, consider uninstalling the extension.

Limit the sites an extension has permission to work on.


Enable Enhanced Protection



The Enhanced protection mode of Safe Browsing is the highest level of protection that Chrome offers. Not only does this mode provide you with the best protections against phishing and malware, but it also provides additional features targeted to keep you safe against potentially harmful extensions. Threats are constantly evolving, and Safe Browsing’s Enhanced protection mode is the best way to ensure that you have the most advanced security features in Chrome. It can be enabled from the Safe Browsing settings page in Chrome (chrome://settings/security) by selecting “Enhanced”.


Detecting browser data theft using Windows Event Logs

Posted by Will Harris, Chrome Security Team











Chromium's sandboxed process model defends well from malicious web content, but there are limits to how well the application can protect itself from malware already on the computer. Cookies and other credentials remain a high value target for attackers, and we are trying to tackle this ongoing threat in multiple ways, including working on web standards like DBSC that will help disrupt the cookie theft industry, since exfiltrating these cookies will no longer have any value.

Where it is not possible to prevent the theft of credentials and cookies by malware, the next best thing is making the attack more observable by antivirus, endpoint detection agents, or enterprise administrators with basic log analysis tools.

This blog describes one set of signals for use by system administrators or endpoint detection agents that should reliably flag any access to the browser’s protected data from another application on the system. By increasing the likelihood of an attack being detected, this changes the calculus for those attackers who might have a strong desire to remain stealthy, and might cause them to rethink carrying out these types of attacks against our users.

Background

Chromium based browsers on Windows use the DPAPI (Data Protection API) to secure local secrets such as cookies, passwords, etc. against theft. DPAPI protection is based on a key derived from the user's login credential and is designed to protect against unauthorized access to secrets from other users on the system, or when the system is powered off. Because the DPAPI secret is bound to the logged in user, it cannot protect against local malware attacks: malware executing as the user or at a higher privilege level can just call the same APIs as the browser to obtain the DPAPI secret.
Since 2013, Chromium has been applying the CRYPTPROTECT_AUDIT flag to DPAPI calls to request that an audit log be generated when decryption occurs, as well as tagging the data as being owned by the browser. Because all of Chromium's encrypted data storage is backed by a DPAPI-secured key, any application that wishes to decrypt this data, including malware, should always reliably generate a clearly observable event log, which can be used to detect these types of attacks.
There are three main steps involved in taking advantage of this log:

Enable logging on the computer running Google Chrome, or any other Chromium based browser.
Export the event logs to your backend system.
Create detection logic to detect theft.

This blog will also show how the logging works in practice by testing it against a python password stealer.

Step 1: Enable logging on the system

DPAPI events are logged into two places in the system. Firstly, there is the 4693 event that can be logged into the Security Log. This event can be enabled by turning on "Audit DPAPI Activity"; the steps to do this are described here, and the policy itself sits deep within Security Settings -> Advanced Audit Policy Configuration -> Detailed Tracking.

Here is what the 4693 event looks like:

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{...}" />
    <EventID>4693</EventID>
    <Version>0</Version>
    <Level>0</Level>
    <Task>13314</Task>
    <Opcode>0</Opcode>
    <Keywords>0x8020000000000000</Keywords>
    <TimeCreated SystemTime="2015-08-22T06:25:14.589407700Z" />
    <EventRecordID>175809</EventRecordID>
    <Correlation />
    <Execution ProcessID="520" ThreadID="1340" />
    <Channel>Security</Channel>
    <Computer>DC01.contoso.local</Computer>
    <Security />
  </System>
  <EventData>
    <Data Name="SubjectUserSid">S-1-5-21-3457937927-2839227994-823803824-1104</Data>
    <Data Name="SubjectUserName">dadmin</Data>
    <Data Name="SubjectDomainName">CONTOSO</Data>
    <Data Name="SubjectLogonId">0x30d7c</Data>
    <Data Name="MasterKeyId">0445c766-75f0-4de7-82ad-d9d97aad59f6</Data>
    <Data Name="RecoveryReason">0x5c005c</Data>
    <Data Name="RecoveryServer">DC01.contoso.local</Data>
    <Data Name="RecoveryKeyId" />
    <Data Name="FailureId">0x380000</Data>
  </EventData>
</Event>

The issue with the 4693 event is that while it is generated if there is DPAPI activity on the system, it unfortunately does not contain information about which process was performing the DPAPI activity, nor does it contain information about which particular secret is being accessed. This is because the Execution ProcessID field in the event will always be the process id of lsass.exe, because it is this process that manages the encryption keys for the system, and there is no entry for the description of the data.
It was for this reason that, in recent versions of Windows, a new event type was added to help identify the process making the DPAPI call directly. This event was added to the Microsoft-Windows-Crypto-DPAPI stream, which manifests in the Event Log in the Applications and Services Logs > Microsoft > Windows > Crypto-DPAPI part of the Event Viewer tree.

The new event is called DPAPIDefInformationEvent and has id 16385, but unfortunately it is only emitted to the Debug channel, and by default this is not persisted to an Event Log unless Debug channel logging is enabled. This can be accomplished by enabling it directly in PowerShell:

$log = `
    New-Object System.Diagnostics.Eventing.Reader.EventLogConfiguration `
    Microsoft-Windows-Crypto-DPAPI/Debug
$log.IsEnabled = $True
$log.SaveChanges()

Once this log is enabled then you should start to see 16385 events generated, and these will contain the real process ids of applications performing DPAPI operations. Note that 16385 events are emitted by the operating system even for data not flagged with CRYPTPROTECT_AUDIT, but to identify the data as owned by the browser, the data description is essential. 16385 events are described later.
You will also want to enable Audit Process Creation in order to be able to know a current mapping of process ids to process names (more details on that later). You might also want to consider enabling logging of full command lines.

Step 2: Collect the events

The events you want to collect are:

From Security log:


4688: "A new process was created."


From Microsoft-Windows-Crypto-DPAPI/Debug log: (enabled above)

16385: "DPAPIDefInformationEvent"


These should be collected from all workstations, and persisted into your enterprise logging system for analysis.

Step 3: Write detection logic to detect theft.

With these two events it is now possible to detect when an unauthorized application calls into DPAPI to try and decrypt browser secrets.
The general approach is to generate a map of process ids to active processes using the 4688 events; then, every time a 16385 event is generated, it is possible to identify the currently running process and alert if the process does not match an authorized application such as Google Chrome. You might find your enterprise logging software can already keep track of which process ids map to which process names, so feel free to just use that existing functionality.
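The correlation just described can be sketched in a few lines. This is an illustrative reimplementation over pre-extracted event fields (the field names match the event data shown in this post; the dictionaries and process list are invented test data):

```python
# Only the browser itself is expected to decrypt browser-tagged DPAPI data.
AUTHORIZED = {r"C:\Program Files\Google\Chrome\Application\chrome.exe"}

def build_pid_map(events_4688):
    """Map decimal pid -> image path, from 4688 'process created' events.
    NewProcessId arrives as a hex string (e.g. 0x17eac)."""
    pid_map = {}
    for e in events_4688:
        pid_map[int(e["NewProcessId"], 16)] = e["NewProcessName"]
    return pid_map

def detect_theft(event_16385, pid_map):
    """Alert if a browser-tagged DPAPI decrypt came from an unauthorized process."""
    if event_16385["OperationType"] != "SPCryptUnprotect":
        return None
    if event_16385["DataDescription"] != "Google Chrome":
        return None
    caller = pid_map.get(int(event_16385["CallerProcessID"]))
    if caller not in AUTHORIZED:
        return f"ALERT: pid {event_16385['CallerProcessID']} ({caller}) decrypted browser secrets"
    return None

pid_map = build_pid_map([
    {"NewProcessId": "0x17eac",
     "NewProcessName": r"C:\Program Files\Google\Chrome\Application\chrome.exe"},
    {"NewProcessId": "0x1f40",
     "NewProcessName": r"C:\Users\User123\stealer.py"},
])
benign = detect_theft({"OperationType": "SPCryptUnprotect",
                       "DataDescription": "Google Chrome",
                       "CallerProcessID": "97964"}, pid_map)
alert = detect_theft({"OperationType": "SPCryptUnprotect",
                      "DataDescription": "Google Chrome",
                      "CallerProcessID": "8000"}, pid_map)
```

A production version would also handle pid reuse, e.g. by comparing CallerProcessCreationTime against the 4688 timestamp.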
Let's dive deeper into the events.
A 4688 event looks like this; for example, here is the Chrome browser launching from explorer:

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{...}" />
    <EventID>4688</EventID>
    <Version>2</Version>
    <Level>0</Level>
    <Task>13312</Task>
    <Opcode>0</Opcode>
    <Keywords>0x8020000000000000</Keywords>
    <TimeCreated SystemTime="2024-03-28T20:06:41.9254105Z" />
    <EventRecordID>78258343</EventRecordID>
    <Correlation />
    <Execution ProcessID="4" ThreadID="54256" />
    <Channel>Security</Channel>
    <Computer>WIN-GG82ULGC9GO.contoso.local</Computer>
    <Security />
  </System>
  <EventData>
    <Data Name="SubjectUserSid">S-1-5-18</Data>
    <Data Name="SubjectUserName">WIN-GG82ULGC9GO$</Data>
    <Data Name="SubjectDomainName">CONTOSO</Data>
    <Data Name="SubjectLogonId">0xe8c85cc</Data>
    <Data Name="NewProcessId">0x17eac</Data>
    <Data Name="NewProcessName">C:\Program Files\Google\Chrome\Application\chrome.exe</Data>
    <Data Name="TokenElevationType">%%1938</Data>
    <Data Name="ProcessId">0x16d8</Data>
    <Data Name="CommandLine">"C:\Program Files\Google\Chrome\Application\chrome.exe" </Data>
    <Data Name="TargetUserSid">S-1-0-0</Data>
    <Data Name="TargetUserName">-</Data>
    <Data Name="TargetDomainName">-</Data>
    <Data Name="TargetLogonId">0x0</Data>
    <Data Name="ParentProcessName">C:\Windows\explorer.exe</Data>
    <Data Name="MandatoryLabel">S-1-16-8192</Data>
  </EventData>
</Event>

The important part here is the NewProcessId, which is in hex: 0x17eac, or 97964 in decimal.
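Note that 4688 events record the process ID in hexadecimal, while the DPAPI events shown below report it in decimal, so correlating the two requires a base conversion. A one-liner illustrates it:

```python
# 4688 events log NewProcessId in hex; 16385 events log CallerProcessID
# in decimal. Converting the NewProcessId from the event above:
new_process_id = int("0x17eac", 16)
print(new_process_id)  # 97964
```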
A 16385 event looks like this:

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-Crypto-DPAPI" Guid="{...}" />
    <EventID>16385</EventID>
    <Version>0</Version>
    <Level>4</Level>
    <Task>64</Task>
    <Opcode>0</Opcode>
    <Keywords>0x2000000000000040</Keywords>
    <TimeCreated SystemTime="2024-03-28T20:06:42.1772585Z" />
    <EventRecordID>826993</EventRecordID>
    <Correlation ActivityID="{777bf68d-7757-0028-b5f6-7b775777da01}" />
    <Execution ProcessID="1392" ThreadID="57108" />
    <Channel>Microsoft-Windows-Crypto-DPAPI/Debug</Channel>
    <Computer>WIN-GG82ULGC9GO.contoso.local</Computer>
    <Security UserID="S-1-5-18" />
  </System>
  <EventData>
    <Data Name="OperationType">SPCryptUnprotect</Data>
    <Data Name="DataDescription">Google Chrome</Data>
    <Data Name="MasterKeyGUID">{4df0861b-07ea-49f4-9a09-1d66fd1131c3}</Data>
    <Data Name="Flags">0</Data>
    <Data Name="ProtectionFlags">16</Data>
    <Data Name="ReturnValue">0</Data>
    <Data Name="CallerProcessStartKey">32651097299526713</Data>
    <Data Name="CallerProcessID">97964</Data>
    <Data Name="CallerProcessCreationTime">133561300019253302</Data>
    <Data Name="PlainTextDataSize">32</Data>
  </EventData>
</Event>

The important parts here are the OperationType, the DataDescription, and the CallerProcessID. For DPAPI decrypts, the OperationType will be SPCryptUnprotect.
Each Chromium-based browser tags its data with the product name, e.g. Google Chrome or Microsoft Edge, depending on the owner of the data. This name always appears in the DataDescription field, so it is possible to distinguish browser data from other DPAPI-secured data.
Finally, the CallerProcessID will map to the process performing the decryption. In this case, it is 97964, which matches the process ID seen in the 4688 event above, showing that this was likely Google Chrome decrypting its own data! Bear in mind that these logs only contain the path to the executable, so for full assurance that this is actually Chrome (and not malware pretending to be Chrome, or malware injecting into Chrome), additional protections such as removing administrator access and application allowlisting can be used to raise confidence in this signal. In recent versions of Chrome or Edge, you might also see logs of decryptions happening in the elevation_service.exe process, which is another legitimate part of the browser's data storage.
To detect unauthorized DPAPI access, you will want to build a running map of all processes from 4688 events, then look for 16385 events whose CallerProcessID does not match a valid caller. Let's try that now.
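As an illustrative sketch (not production detection code), the matching logic could look like the following. It assumes the two event streams have already been parsed into Python dicts keyed by the Data Name fields shown above; the allowlist of browser executable paths is an assumption you would tune for your environment.

```python
# Sketch: correlate 4688 process-creation events with 16385 DPAPI decrypt
# events and alert on decrypts of browser data by unexpected processes.
# Field names follow the event logs above; the allowlist is illustrative.

BROWSER_DESCRIPTIONS = {"Google Chrome", "Microsoft Edge"}
ALLOWED_CALLERS = (
    r"C:\Program Files\Google\Chrome\Application\chrome.exe",
    r"C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe",
)

def build_process_map(events_4688):
    """Map decimal PID -> executable path (last writer wins)."""
    pid_map = {}
    for ev in events_4688:
        pid = int(ev["NewProcessId"], 16)  # 4688 logs PIDs in hex
        pid_map[pid] = ev["NewProcessName"]
    return pid_map

def find_suspicious_decrypts(events_16385, pid_map):
    """Return (CallerProcessID, exe_path) pairs worth alerting on."""
    alerts = []
    for ev in events_16385:
        if ev.get("OperationType") != "SPCryptUnprotect":
            continue  # not a decrypt
        if ev.get("DataDescription") not in BROWSER_DESCRIPTIONS:
            continue  # not browser data
        caller = pid_map.get(int(ev["CallerProcessID"]))
        if caller not in ALLOWED_CALLERS:
            alerts.append((ev["CallerProcessID"], caller))
    return alerts
```

Feeding the two example events above through these functions produces no alert, since the caller resolves to chrome.exe; an unrecognized caller would be flagged.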

Testing with a python password stealer

We can test that this works using a publicly available password-stealing script taken from
a public blog. As expected, it generates two events:
Here is the 16385 event, showing that a process is decrypting the "Google Chrome" key.

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    < ... >
    <EventID>16385</EventID>
    < ... >
    <TimeCreated SystemTime="2024-03-28T20:28:13.7891561Z" />
    < ... >
  </System>
  <EventData>
    <Data Name="OperationType">SPCryptUnprotect</Data>
    <Data Name="DataDescription">Google Chrome</Data>
    < ... >
    <Data Name="CallerProcessID">68768</Data>
    <Data Name="CallerProcessCreationTime">133561312936527018</Data>
    <Data Name="PlainTextDataSize">32</Data>
  </EventData>
</Event>

Since the data description being decrypted was "Google Chrome", we know this is an attempt to read Chrome secrets, but to determine the process behind PID 68768 (0x10ca0) we need to correlate this with a 4688 event.
Here is the corresponding 4688 event from the Security Log (a process start for python3.exe) with the matching process id:

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    < ... >
    <EventID>4688</EventID>
    < ... >
    <TimeCreated SystemTime="2024-03-28T20:28:13.6527871Z" />
    < ... >
  </System>
  <EventData>
    < ... >
    <Data Name="NewProcessId">0x10ca0</Data>
    <Data Name="NewProcessName">C:\python3\bin\python3.exe</Data>
    <Data Name="TokenElevationType">%%1938</Data>
    <Data Name="ProcessId">0xca58</Data>
    <Data Name="CommandLine">"c:\python3\bin\python3.exe" steal_passwords.py</Data>
    < ... >
    <Data Name="ParentProcessName">C:\Windows\System32\cmd.exe</Data>
  </EventData>
</Event>

In this case, the process ID matches the python3 executable running a potentially malicious script, so we know this is likely very suspicious behavior and should trigger an alert immediately! Bear in mind that process IDs on Windows are not unique, so you will want to use the 4688 event with the timestamp closest to, but earlier than, the 16385 event.
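Because Windows reuses process IDs, the "closest earlier 4688" selection can be sketched as follows; this assumes each parsed event carries its SystemTime as a datetime, with field values modeled on the logs above:

```python
# Sketch: when a PID has been reused, pick the 4688 event whose timestamp
# is closest to, but not later than, the 16385 decrypt event.
from datetime import datetime

def closest_earlier_4688(events_4688, pid, decrypt_time):
    """events_4688: list of (timestamp, hex_pid, exe_path) tuples."""
    candidates = [
        (ts, exe) for ts, hex_pid, exe in events_4688
        if int(hex_pid, 16) == pid and ts <= decrypt_time
    ]
    # max() on (timestamp, path) tuples picks the latest qualifying start
    return max(candidates)[1] if candidates else None

starts = [
    (datetime(2024, 3, 28, 9, 0), "0x10ca0", r"C:\Windows\notepad.exe"),
    (datetime(2024, 3, 28, 20, 28, 13), "0x10ca0", r"C:\python3\bin\python3.exe"),
]
print(closest_earlier_4688(starts, 68768, datetime(2024, 3, 28, 20, 28, 14)))
# -> C:\python3\bin\python3.exe (the reused PID's most recent owner)
```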

Summary

This post has described a technique for strong detection of cookie and credential theft. We hope that all defenders find it useful. Thanks to Microsoft for adding the DPAPIDefInformationEvent log type, without which this would not be possible.
