Understanding how secure a SecureString is

  • General discussion

  • Hi,

    Evaluating the use of System.Security.SecureString to store data that is confidential. The Class description for SecureString on MSDN has remarks that include:

    "if a String object contains sensitive information such as a password, credit card number, or personal data, there is a risk the information could be revealed after it is used because your application cannot delete the data from computer memory."

    From this MSDN remark I infer that SecureString is intended to offer sufficient security for the applications mentioned. A search for SecureString on this forum also turned up the following advice:

    "As far as in memory representation goes it is the same problem. If it is in memory then anybody can get to it. You should consider using SecureString for truly secure situations."

    The issue I want to discuss is the understanding of how secure a SecureString is. This is not a criticism of SecureString itself - merely of the understanding of the security it offers.

    Starting with the premise that at some moment in time the data is "in the clear" on a PC, there is only a limited amount of protection that can be achieved. The vulnerabilities are that data can be found in cleartext in RAM, or in the page file. Storing data in an encrypted form does reduce the window of opportunity for finding it in cleartext, but opportunities for disclosure remain.

    Data is added to a SecureString character by character. As an interface, this appears to allow the contents of the secure string to be built up char by char - so the consumer of the interface might assume they are avoiding having the entire contents in the clear at the same time. The implementation appears to be far from this: the entire buffer is decrypted (i.e. put into cleartext) before the next character is added, and then the entire buffer is (re)encrypted. The shortcoming is that building a SecureString results in numerous points in time where part of the data is in the clear, plus one point where all of the data is in the clear (in managed memory) - while building the contents of the SecureString. AFAIK there is nothing that prevents the process being pre-empted while some or all of the data is in the clear, with the possible result that the cleartext contents are written to the swap file. I am aware that there is a registry setting/group policy entry that specifies the page file be cleared on shutdown, but if the shutdown is not orderly - due to a power outage etc. - that clearing never happens. There are also techniques that allow other processes to read a process's RAM, which introduces further vulnerabilities.
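    To make the char-by-char interface concrete, here is a minimal sketch of the usual construction pattern (reading keystrokes rather than a string literal, since a literal would itself be a cleartext managed String and defeat the purpose; even so, as noted above, each AppendChar call briefly exposes the partial contents internally):

```csharp
using System;
using System.Security;

class BuildExample
{
    static void Main()
    {
        // SecureString is built one character at a time; internally the
        // whole buffer is decrypted and re-encrypted on every AppendChar,
        // so partial cleartext exists in memory during construction.
        using (SecureString secure = new SecureString())
        {
            // Reading keystrokes avoids ever holding the full password
            // in an immutable managed String.
            ConsoleKeyInfo key;
            while ((key = Console.ReadKey(true)).Key != ConsoleKey.Enter)
            {
                secure.AppendChar(key.KeyChar);
            }
            secure.MakeReadOnly(); // no further modification allowed
        }
    }
}
```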

    At the other end of the spectrum, when extracting data using the methods in the Marshal class, the cleartext data is copied out into unmanaged RAM. Unfortunately, the RAM that the contents are copied into could also be swapped to disk in the time between the data being extracted from the SecureString and being zeroed out by the security-conscious developer, or it could be accessed by another process. All of these methods allocate the memory for the data themselves, so the caller has no ability to control where it lives.
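    The extraction pattern looks like this (a sketch: SecureStringToGlobalAllocUnicode copies the decrypted contents into unmanaged, pageable memory that the method itself allocates, and between the copy and the ZeroFree call the cleartext is exposed):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Security;

class ExtractExample
{
    // Decrypts the SecureString into an unmanaged buffer, uses it,
    // then zeroes and frees the buffer. The caller cannot choose where
    // the buffer is allocated, so it remains pageable the whole time.
    static void UseCleartext(SecureString secure)
    {
        IntPtr ptr = IntPtr.Zero;
        try
        {
            ptr = Marshal.SecureStringToGlobalAllocUnicode(secure);
            // ... pass ptr to an API that needs the cleartext ...
        }
        finally
        {
            if (ptr != IntPtr.Zero)
                Marshal.ZeroFreeGlobalAllocUnicode(ptr); // zero, then free
        }
    }
}
```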

    I have been looking at how to protect memory from being swapped to the page file, and the only options I have found are the AWE APIs or implementing a kernel-level device driver to access the non-paged pool. Since the methods in the Marshal class allocate the memory themselves anyway, the vulnerability remains.

    The SecureString is a definite improvement on the (immutable) String object - as MSDN says. Understanding both the security the SecureString offers as well as the vulnerabilities that remain is vital for developers so that they can determine how and when to use it.

    There are a number of things that the OS/.Net Framework need to offer to improve the environment for secure code development.

    First, being able to mark memory as non-swappable. I have read countless posts on this forum where developers have asked for this and the reply is 'just leave it to the OS'. Security of confidential data is one reason it should be possible without encrypting the disk or page file. Limit the amount of memory that can be non-paged if you like, return an error if the system cannot cope with the request - but developers need somewhere they can keep data that will not ever, ever reach the swap file.
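    For what it is worth, Win32 does expose VirtualLock, which pins pages into the process working set so they are normally not written to the page file - though the documentation hedges on what happens when the whole process is swapped out, and it cannot be applied to buffers that the Marshal methods allocate internally, since you only get the pointer after the copy has happened. A hedged P/Invoke sketch:

```csharp
using System;
using System.Runtime.InteropServices;

class LockedBuffer
{
    // Win32 declarations (kernel32). VirtualLock keeps the given pages
    // resident in the working set; it is not an absolute guarantee
    // against paging in every scenario (e.g. full process swap-out).
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool VirtualLock(IntPtr lpAddress, UIntPtr dwSize);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool VirtualUnlock(IntPtr lpAddress, UIntPtr dwSize);

    static void Main()
    {
        const uint size = 4096;
        IntPtr buf = Marshal.AllocHGlobal((int)size);
        try
        {
            if (!VirtualLock(buf, (UIntPtr)size))
                throw new System.ComponentModel.Win32Exception();

            // ... copy sensitive data into buf, use it, then zero it ...

            VirtualUnlock(buf, (UIntPtr)size);
        }
        finally
        {
            Marshal.FreeHGlobal(buf); // buffer is zeroed before this point
        }
    }
}
```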

    Second, being able to have a section of code that cannot be pre-empted. I am sure that will get the kernel developers up in arms - but before you all reply, consider this: if you give us the first request, we won't need the second (at least not yet).

    Finally, total security is not possible - it just has to be good enough.

    Anyone care to comment?

    Friday, August 29, 2008 5:46 AM

All replies

  • SecureString is trivially cracked when somebody can locate the code that initializes it.  Locating the code requires effort; you can make that difficult.  Anything is crackable - it only becomes "secure" when the cost outweighs the gain. 

    Preventing memory from being written to the swap file would be an example of the cost of securing outweighing the benefit of the added security.  The odds that the page holding the unprotected string would be swapped out are minuscule.  Pre-empting the thread at just the right time (+/- nanoseconds) and causing enough page faults to get it swapped out is so unlikely that it would require the attacker to dig through years, eons, of page file snapshots.

    Hans Passant.
    Friday, August 29, 2008 10:31 AM
  • While I have established that SecureString is insufficiently secure for my purposes, I have to disagree with your opinion about the chance of data ending up on the disk.

    Actual tests reveal that much data an application never stored on disk was in fact readily found on the disk - using forensic techniques. Investigation indicated that the source of the data was the page file.
    We are not referring to an off-chance occurrence; we are talking about multiple occurrences found on a machine that had been securely erased and forensically checked prior to the tests. Sorry, "unlikely" just does not cut it. 

    Much data that needs to be kept confidential is recognisable in itself - and can be detected as such in a forensic sweep of the disk. I am not referring to trivial security requirements here. Please refer to http://www.commoncriteriaportal.org/ or http://www.iso.org/iso/iso_catalogue/catalogue_detail.htm?csnumber=40612 

    I am now beginning to wonder if the Windows platform is secure enough to process this information in cleartext. 
    Saturday, August 30, 2008 3:33 AM