General production questions: Debug info, Base Location, Manifest and code certificates

  • Question

  • I have some general questions. With a new 64 bit production, I'm taking the opportunity to change a few things and explore others I never really considered. Hopefully other long-time MS developers who want to come back into the MS fold have similar interests and can learn from this discussion.

    Some Background:

    For many years (since VS98), the release build of my product was done with debug information to help with any customer reports. Starting with VS2015, I added a "Release_v140_xp" configuration. Using Release PostBuild batch files for each project (over 125), I kept the binary images (Libs, DLLs, EXEs) and the symbolic output (*.MAP, *.PDB, *.COD, *.ASM) in LAN machine folders separated by version and build. I rarely had an issue regarding customer GPF reports that wasn't traceable. In fact, for a selected set of images, I set the Base Location, which greatly assisted in pinpointing problem areas over the years:

      wsmw.dll      0x08000000
      wsgate.dll    0x08100000
      wcsrv.dll     0x08000000
      wcsmw.dll     0x08100000
      wcsgate.dll   0x08200000
      wccore.dll    0x08300000
      wccomm.dll    0x08400000
      wcomodem.dll  0x08500000
      wcotelnt.dll  0x08600000
      wcoftp.dll    0x08700000
      wcguiagt.dll  0x08800000
      wcotapi.dll   0x08900000
      wcohttp.dll   0x08a00000
      wchttps.dll   0x08b00000
      wcsock32.dll  0x08c00000
      wcopop3.dll   0x08d00000
      mkwsrv.dll    0x08e00000
      mimelib.dll   0x08f00000  
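    For context, each project's Release PostBuild step amounted to something like this (a sketch, not the actual batch files; the server path and version variables are illustrative, while $(TargetDir) and $(TargetName) are the standard VS build macros):

```bat
REM Archive the build output per version/build (sketch; paths are illustrative)
set DEST=\\BUILDSRV\archive\%PRODVER%\build%BUILDNUM%
if not exist "%DEST%" mkdir "%DEST%"
copy /Y "$(TargetDir)$(TargetName).dll" "%DEST%"
copy /Y "$(TargetDir)$(TargetName).pdb" "%DEST%"
copy /Y "$(TargetDir)$(TargetName).map" "%DEST%"
```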

    I also never used Manifests or Code Certificates.

    Now that I am in the 64 bit realm, and no longer compiling for XP, I have new considerations.

    Release Images Debugging Info and Base Locations:

    Question #1, more of a confirmation: can I still use Base Locations, but with 64 bit considerations? What are those considerations?

    I believe today I may not need them any more. For a number of years the product was solid and stable, and when there were customer GPF reports, their minidump files plus the debug info kept in the image have been sufficient to zoom in on the problem areas. I will probably remove them anyway, since fixed, "known memory" base locations might be viewed as a security risk today, giving bad guys known addresses to target. But is keeping the debug information in the images still a good idea? Or is having the separate *.PDB available sufficient? Does the user's WER setup need the PDB files to create the minidumps?

    Manifest Files:

    I never used Manifest Files. When they first came out, I think with the VS2003/VS2005 versions, which I skipped because of the Manifest and "DLL Hell" mess, MS was, in my opinion, "lost" in this era of manifest files and OS DLL mismatch issues. My understanding is that Manifest files helped with that, and also with GUI look-and-feel related stuff. But they caused issues for developers too, and as I recall, with VS2010 MS finally brought things back to earth with the promotion of a "Better VS98" to give developers back control of their distribution, and the Manifest became less of an issue. But overall, I never wanted the overhead, and I stayed with VS98 compiles during the 2000s until VS2010 came out.

    Do we need them, embedded or separate?  Can I safely disable them?  Why would I consider them today? 

    Code Certificates:

    Finally, code certificates. Let me first note an opinion, which you can skip if you don't care for it. <opinion>I believe it is morally wrong for MS to force developers to pay a 3rd party CA (Certificate Authority) to vouch for their software product, for the following simple reasons: 1) It makes no sense to pay a 3rd party CA for a product they never saw. The 3rd party CA is not "trusting" your code because they saw or used your code. They are "trusting" it because I paid them to vouch for it. 2) If customers have been using my product for 5, 10, 15, 20+ years, where they have installed it, maintained it and even programmed it, why should they no longer trust it when the newer OSes now give the customer a "suggestion" that it is not "secured"? If the product is already installed, and the customer has white-listed it via the Firewall popups, why is the OS still suggesting it isn't signed and can't be trusted? If I can make a suggestion: MS should review their policy here. MS should be the Trusted Certificate Authority (CA), not a 3rd party CA. After all, the code is running on Microsoft OSes. If a developer has paid for VS Pro or an MSDN subscription, as I did in 2015, the package should come with Microsoft Code Certification capabilities, where the OS does the "trust chain" back to MS, especially during an OS update where my product has not changed but the OS did. Yet the OS doesn't white-list the products and applications that are already installed. It should at least prompt the user, "Do you wish to white list/trust the programs/applications already installed?", with a checkbox list of installed applications. Yes, that is an extra complication, but an OS that no longer trusts what is already installed and in use is odd and wrong, and having to pay a 3rd party is even more odd and, ethically and morally, wrong.</opinion>

    So with that said, I never used code certificates, but today, in the name of Marketing, because it has nothing to do with Security, I perhaps need to consider it and worry about the "ethics" another time. My question is how it is done when there are 125 EXEs and DLLs. Can this be done in a Post Build batch file for each DLL and EXE, or for selected ones? Do I need to code-sign each one? Can I use a global certificate? Or is that a CA-related issue? Is there a self-signed vs. CA-signed concept, like a browser SSL cert, where the OS is going to say "This is self-signed. Can't be trusted"? Is there an expiration date concept too? That will be a tough one to swallow, having customers call me to say "Hey, all of a sudden Windows is saying Wildcat! can no longer be trusted. Why? Is there a virus or something?"
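    From what I have read so far, the mechanics would be a post-build step along these lines (an untested sketch; the certificate file name, password variable and timestamp URL are placeholders, while the signtool switches themselves are the documented ones):

```bat
REM Sign every EXE and DLL in the output folder (sketch; placeholders throughout)
for %%F in ("%OUTDIR%\*.exe" "%OUTDIR%\*.dll") do (
    signtool sign /f wildcat.pfx /p %CERTPW% /fd SHA256 /tr http://timestamp.example.com /td SHA256 "%%F"
)
```

    The /tr timestamp is what keeps signatures valid after the certificate itself expires, which seems to answer part of my own expiration question.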

    Overall, with my new 64 bit and non-XP work, I am trying to "catch up," to a limit, with whatever the new "Microsoft VS norm" is for Windows OSes. Normally I upgrade VS every 5 years, so 2020 will be the next one. But today I need to set up the new production with VS2015.

    Thanks and your input would be greatly appreciated.  

    Hector Santos, CTO Santronics Software, Inc. http://www.santronics.com

    Monday, April 15, 2019 3:03 PM

All replies

  • I'm only going to respond with regard to the application manifests. The other two points are things that I don't really have to deal with on a regular basis; this is especially true of ASLR.

    Application manifests are always generated during a build and embedded by default. Windows uses the application manifest to determine whether a 32 bit application should be treated as a legacy application and can be used to enable optional features.

    For example, the application manifest is where you declare that the application is Windows 7, 8, 8.1 and 10 compatible, which disables the legacy compatibility behaviour on those OS versions. If you want to opt into high DPI awareness, long path support and more, the application manifest is where these settings end up.
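    To make that concrete, a minimal manifest fragment for two of those settings looks like this (the supportedOS GUIDs shown are the published values for Windows 10 and 8.1; verify them against the current SDK documentation before relying on them):

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
    <application>
      <!-- Windows 10 -->
      <supportedOS Id="{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}"/>
      <!-- Windows 8.1 -->
      <supportedOS Id="{1f676c76-80e1-4239-95bb-83d0f6d0da78}"/>
    </application>
  </compatibility>
  <application xmlns="urn:schemas-microsoft-com:asm.v3">
    <windowsSettings>
      <!-- Opt into high DPI awareness -->
      <dpiAware xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">true</dpiAware>
    </windowsSettings>
  </application>
</assembly>
```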

    While some things can be enabled programmatically, not everything can.

    Manifest files are very widely used even today, just not for the things that you seem to have bad memories about.

    This is a signature. Any samples given are not meant to have error checking or show best practices. They are meant to just illustrate a point. I may also give inefficient code or introduce some problems to discourage copy/paste coding. This is because the major point of my posts is to aid in the learning process.

    Monday, April 15, 2019 4:42 PM
  • Thanks. Going through some project recollections, there were one or more projects where DEP, DPI and UAC related items were considered, and maybe I did keep the embedded manifest enabled. Too long ago to remember the details, but it seems related to maintaining WIN32 compatibility when MS was adding many new security restrictions; in particular I recall the UAC stuff. I do recall we were forced to port one of the apps written in Borland C/C++ to VS C/C++ because the Borland GUI engine was considered insecure, either DEP or DPI wise. Don't recall which.

    Anyway, what you seem to be suggesting is to keep the manifest defaults for maximum support.


    PS: It wasn't bad memories for me, because I avoided the "DLL Hell" era; it was well known:

    https://www.google.com/search?q="DLL Hell"

    Hector Santos, CTO Santronics Software, Inc. http://www.santronics.com

    Monday, April 15, 2019 8:37 PM
  • Code signing and certificates are not about security.  They are about legal liability.  If your application causes damage or injury in some way, the injured party is going to look for someone to sue.  If your code is unsigned, you could say "well, the file was modified later, so this isn't the code I wrote."

    But with code signing, there is a path to find you.  The CA is not validating your code, they are saying "this person is who he says he is, and can be reached here."  That will stand up in court.  And because the signature includes a checksum, the injured party can attest that the executable binary is in exactly the same condition it was when it left your building, and therefore the damage it caused is entirely attributable to you.

    Morals and ethics are not involved.

    Tim Roberts | Driver MVP Emeritus | Providenza & Boekelheide, Inc.

    Monday, April 15, 2019 9:34 PM
  • Digital signing is always about Security and Trust.   

    Security from the standpoint that the signed entity's integrity remains intact (not tampered with), and you don't need a 3rd party to verify the integrity. Once the entity's integrity is verified, what remains is trust. There are two types of certificates: self-signed certificates, and your self-signed certificate "chain" signed by a 3rd party CA via what is called an Intermediate Certificate. The 3rd party way provides a "Trust" concept because the certificate processor (OS, browser, etc.) has an agreement with the trusted CA to look up the certificate (following the 3rd party chain) against the CA. It might even use OCSP (Online Certificate Status Protocol), which is increasingly becoming the norm. But it doesn't have to. There is no other reason for this but to see whether your certificate has expired or been revoked. I can pay Microsoft to add our Intermediate Certificates to their server. So it's really about money.

    In general, when there are discrepancies among any of these concepts, there could be liability issues, but that applies to anything, and you don't need certs for it. Anyone can sue anyone. Push comes to shove, vendors can sue Microsoft for labeling their software as UNTRUSTED if not code signed.

    All it will offer you is a small hint that perhaps FRAUD or SPOOFING took place. But again, you can do integrity verification without a 3rd party. I would hope the OS will refuse to load an EXE/DLL whose checksum or integrity check failed. I suppose that can be hacked, and this is where a self-signed EXE/DLL will help.

    A key point to remember is that this is an optional concept, though in some industries it MAY be a requirement. But I serve the PCI business, and as of yet there is no such requirement. My applications are not required to be code certified. How do I know? My PCI customers are not requiring my code to be signed. They are not asking for it. Their PCI Auditors are not asking for it. If they did, there would probably be some lawsuits!! <g>

    It is not a bad idea to do checksums and possibly code signing to help with integrity, but it should not be a requirement that it be signed by a 3rd party. That can be costly, and when it comes to possibly using self-signed code certs, my interest is in how the OS will react to them. So why am I considering code signing? Marketing. That's it. But right now, non-participation is probably the best option. Don't try to code sign 128 objects!!!

    Thanks for your comment.   

    Hector Santos, CTO Santronics Software, Inc. http://www.santronics.com

    Tuesday, April 16, 2019 12:57 AM
  • For the base address question, these days there is generally more to the equation than just the base address.

    The base address of the library is the preferred base address for that library. DLLs were historically built with relocation information to allow them to be relocated if Windows couldn't load them at that address. So even if you had set the base address of a DLL, it was possible that the library wasn't loaded there.

    The 32 bit addresses that you listed in your original post show that you set the libraries to load starting around the 128MiB point in the address space, which is often where the default process heap goes, so I would be surprised if your libraries loaded at those addresses very often anyway.

    With the release of Windows Vista, Windows introduced ASLR, and any DLL with the /DYNAMICBASE option set is subject to ASLR. This became a default option at some point and is definitely the default in Visual Studio 2015, so it is very unlikely that your DLLs are loading at those addresses now, unless you explicitly disabled /DYNAMICBASE.

    With regard to debugging, the preferred base address of the DLL wouldn't be relied upon when writing a minidump file, because of the possibility of the DLL being relocated. Instead, the actual base address of every loaded module is stored in the minidump file. So no, setting the base address for a DLL wouldn't really be that useful these days.

    What's more, for 64 bit Windows there is HEASLR (High Entropy ASLR). You can control this with the /HIGHENTROPYVA option, but it is enabled by default for 64 bit builds. One of the problems with ASLR was that the amount the executable image (this applies to both executables and DLLs) could move was limited. HEASLR, however, can place the image anywhere in the 64 bit user address space, which is currently 128TiB. So this limitation of ASLR has been mitigated a lot.

    So really, there isn't much to consider these days. For 64 bit libraries the base address is pointless to care about, since HEASLR makes it redundant, and since the minidump files contain the actual library base addresses, you can just keep generating debug symbols and not worry about anything else.
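    For reference, these are the linker switches in play (the first two are the VS2015 defaults described above; the /BASE line is what you would be dropping):

```
/DYNAMICBASE          opt in to ASLR (default)
/HIGHENTROPYVA        High Entropy ASLR, 64 bit builds only (default)
/DEBUG /PDB:file.pdb  emit the separate .pdb used when analysing dumps
/BASE:0x08000000      preferred base address (effectively ignored under ASLR)
```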

    This is a signature. Any samples given are not meant to have error checking or show best practices. They are meant to just illustrate a point. I may also give inefficient code or introduce some problems to discourage copy/paste coding. This is because the major point of my posts is to aid in the learning process.

    Tuesday, April 16, 2019 6:32 PM
  • Thank you for your technical input.  You have been very helpful. 

    Yes, you are right, the base location was technically not a guarantee. But it worked very efficiently early on. During the early 16 bit to 32 bit 95/NT days, the locations selected (normally in sequence) were the best non-interfering, non-conflicting locations, and they were almost always reliable. In other words, when it was required for a support incident, hunting a GPF with the Dr. Watson logs and our COD and ASM files, it provided the right location of the DLL to focus on. But as you said, it was not a guarantee, especially as the OSes progressed and new virtual memory management strategies evolved, and with a "fixed location" also being a threat entry point, it was no longer really needed for debugging purposes. With modern debugging techniques in the code today, using your own exception traps/logs, etc., I had stopped the practice with newer modules.

    However, I haven't removed them from the updated 32 bit configurations, and the 64 bit clones of those configurations still have them set. So I feel, for sure, they are more than likely going to be ignored at run time anyway. I will remove them from the configurations at some point.


    Hector Santos, CTO Santronics Software, Inc. http://www.santronics.com

    Saturday, April 20, 2019 8:34 PM