Running untrusted code in a sandbox

  • Question

  • Hello! 

    According to https://msdn.microsoft.com/en-us/library/bb763046(v=vs.110).aspx, running code with unknown origin without putting additional security measures in place is not recommended. What are the additional security measures? 

    Let's say that you have an application (in .NET) with an add-in/plugin-based system. The add-ins can be written by third parties, and you want to restrict what they can actually do. What is the recommended way of doing this? Previously this would probably have been "solved" with CAS and AppDomains, but from the article I understand that this is no longer recommended. Any tips?

    --

    HansO  

    Thursday, May 19, 2016 9:56 PM

Answers

  • Hi,

    in my eyes, it does not provide much protection. Any additional security measure might be the thing that stops an attack, but I remember discussions where people went into the details, and if I remember correctly, the conclusion of most of them was that it is not a good way to protect yourself.

    But I only spent time on CAS when I was training for the 70-536 certification, which also covered this topic. While that certification was active, there were some discussions about it in the forums.

    So in my eyes it can add some security, but I do not see it as a solution for isolation or as a security boundary. That is my personal opinion only.

    With kind regards,

    Konrad

    • Proposed as answer by Kristin Xie Friday, May 27, 2016 9:05 AM
    • Marked as answer by DotNet Wang Monday, May 30, 2016 9:33 AM
    Friday, May 20, 2016 11:57 AM

All replies

  • Hi,

    The important rule, in my eyes, is: never run untrusted code, or code from untrusted sources, on trusted systems. There is no real security here. Even if something should be secure, there might be security problems that can be used to gain elevated rights, break out of a sandbox, and so on.

    From my understanding, that is the reason why Microsoft added that point and reduced the possibilities of CAS (or even deprecated it?).

    There are tools that can be used to secure things. You could use firewalls to restrict network access. You could run software under user accounts with limited rights. You might use software that protects the system by hooking many operations, so that it can additionally check them or limit the resources used. But no set of security measures is ever good enough to fully protect your system; malware could still find a way to break out of your sandbox.

    Examples in the past have even shown that malware found ways to break out of the "sandbox" that a hypervisor creates, meaning that one VM was able to access other VMs on the same host. So even a virtual machine as a sandbox can be a risk.

    So such tools are really only effective against (partially) trusted code. For example: I trust your code, but I do not want it to go mad, so I use software that limits the CPU usage of your application on my terminal servers. Or I partly trust your code, but I want to make sure it does not change my system, so I run it inside an App-V container (all changes to the system then stay inside its container only. I really love App-V!).

    With kind regards,

    Konrad

    Friday, May 20, 2016 2:58 AM
  • Hello, Konrad.

    Thank you for replying.

    As the three golden principles of network security state:

    1) Don't buy a computer.

    2) If you must have a computer, don't turn it on.

    3) If you have to turn it on, don't connect it to a network.

    You are giving trust to other applications based on some criteria/pre-screening and then running them isolated in App-V containers (which can probably also be evaded)?

    For instance, the main app you are running each day is the web browser, which runs untrusted code all the time. Chrome runs this code in a sandboxed environment (a separate process with a restricted token, job and desktop objects - mechanisms provided by the operating system).

    I think there will always be a level of risk modelling and risk acceptance involved, and this is where the linked article fails. One definition of security is the absence of unacceptable risk.

    If you have pre-screening of the vendor and the application, could AppDomains then be an acceptable solution for isolation (and as a security boundary), or would you say that AppDomains are not suitable for this at all?
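    For reference, the pattern I mean is the classic MSDN sandboxed-AppDomain setup, roughly as sketched below. This is a .NET Framework-only sketch, the assembly and type names are hypothetical, and per the linked article it should not be treated as a real security boundary:

```csharp
// Sketch of the classic (now discouraged) sandboxed AppDomain pattern.
// .NET Framework only; illustration, not a security boundary.
using System;
using System.Security;
using System.Security.Permissions;

class SandboxHost
{
    static void Main()
    {
        // Grant set: execution only - no file, network or UI permissions.
        var permissions = new PermissionSet(PermissionState.None);
        permissions.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));

        // Base the sandbox in the plugin's own directory (path hypothetical).
        var setup = new AppDomainSetup { ApplicationBase = @"C:\Plugins" };

        // Fully trusted helper assemblies could go in the final params argument.
        AppDomain sandbox = AppDomain.CreateDomain(
            "PluginSandbox", null, setup, permissions);

        // Code loaded into this domain runs with the restricted grant set.
        // (Assembly and type names below are hypothetical.)
        sandbox.CreateInstanceAndUnwrap("MyPlugin", "MyPlugin.EntryPoint");
    }
}
```

    The point of the question is whether, after pre-screening, this kind of grant-set restriction is good enough, or whether it only raises the bar without being a boundary.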


    Friday, May 20, 2016 11:35 AM
  • I wonder what this change in support policy means for CLR support in Microsoft SQL Server. In SQL Server 2016 and earlier, once the system administrator has enabled CLR support in the instance, database users with the CREATE ASSEMBLY permission can load CLR assemblies to SQL Server without needing additional instance-wide permissions, unless they explicitly request a more privileged permission set. Will SQL Server be able to sandbox these assemblies? If SQL Server can do that, why wouldn't IIS?
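    As a concrete illustration of those permission sets: a SQLCLR assembly compiled from C# like the sketch below (names hypothetical) can be loaded with the default SAFE permission set, which restricts the code to computation only:

```csharp
// Hypothetical SQLCLR user-defined function. Loaded with the default
// SAFE permission set, it may only do computation - no file, network
// or unmanaged access.
using Microsoft.SqlServer.Server;
using System.Data.SqlTypes;

public class PluginFunctions
{
    [SqlFunction]
    public static SqlInt32 AddOne(SqlInt32 x)
    {
        return x + 1;
    }
}
```

    The corresponding CREATE ASSEMBLY statement would use WITH PERMISSION_SET = SAFE; EXTERNAL_ACCESS and UNSAFE are the more privileged sets that require additional instance-wide trust.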

    I guess Microsoft has decided that security-safe-critical code is too difficult to audit. Can't blame them, really. Consider the Buffer example in the .NET Security Blog: it says the Buffer.Dispose(bool) method is not safe to call from multiple threads simultaneously, so trusted code that creates a Buffer instance should not let untrusted code access the instance if there is a chance of it using multiple threads… but the Buffer(int) constructor is also marked security-safe-critical, so I think untrusted code can just create its own Buffer instance and corrupt the unmanaged heap that way.

    In Windows Store apps, managed code runs in full trust as far as the CLR is concerned, but the entire process is sandboxed. I think this is the model Microsoft intends to support nowadays. It has the advantage that unmanaged high-performance code can be sandboxed too. Unfortunately, documentation of how to start this kind of sandbox is scarce. PROC_THREAD_ATTRIBUTE_SECURITY_CAPABILITIES looks relevant but I don't know what kind of privileges it requires. SECURITY_CAPABILITIES::AppContainerSid you'd presumably get from CreateAppContainerProfile or DeriveAppContainerSidFromAppContainerName.
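    My current understanding of the flow, sketched below from C# via P/Invoke, is untested and based only on the documented userenv.dll signatures; the container name is made up, and error handling is elided:

```csharp
// Rough sketch (untested, Windows 8+ only): create an AppContainer profile
// and obtain the SID that would go into SECURITY_CAPABILITIES for
// PROC_THREAD_ATTRIBUTE_SECURITY_CAPABILITIES.
using System;
using System.Runtime.InteropServices;

class AppContainerSketch
{
    [DllImport("userenv.dll", CharSet = CharSet.Unicode)]
    static extern int CreateAppContainerProfile(
        string appContainerName, string displayName, string description,
        IntPtr capabilities, uint capabilityCount, out IntPtr appContainerSid);

    [DllImport("userenv.dll", CharSet = CharSet.Unicode)]
    static extern int DeriveAppContainerSidFromAppContainerName(
        string appContainerName, out IntPtr appContainerSid);

    static void Main()
    {
        // Registers the container profile for the current user.
        IntPtr sid;
        int hr = CreateAppContainerProfile(
            "MyPluginContainer", "My Plugin Container",
            "Sandbox for untrusted plugins", IntPtr.Zero, 0, out sid);

        // If the profile already exists, derive its SID instead.
        if (hr != 0)
            DeriveAppContainerSidFromAppContainerName(
                "MyPluginContainer", out sid);

        // Next steps (not shown): place the SID in a SECURITY_CAPABILITIES
        // struct, attach it with UpdateProcThreadAttribute using
        // PROC_THREAD_ATTRIBUTE_SECURITY_CAPABILITIES, and pass the attribute
        // list to CreateProcess with EXTENDED_STARTUPINFO_PRESENT.
    }
}
```

    If that is roughly right, the privilege question may mostly be about creating the profile and writing ACLs for any resources the contained process should reach, rather than any special privilege for CreateProcess itself - but again, I have not verified this.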

    Monday, May 30, 2016 5:13 PM