access shared memory in finalizer

  • Question

  • I have a Windows desktop application that requires the coordination of a number of processes (not threads) on the local machine. I plan to achieve that coordination using shared memory managed by the MemoryMappedFile class introduced in .NET Framework 4.

    When a process exits, I need to reliably “sign-off” of the interaction by writing a value to the shared memory. I thought to use the IDisposable pattern with a finalizer to achieve that, but as I’ve studied how finalizers work I’m not sure my plan is sound.

    As I understand it, it is unsafe to access managed objects in a finalizer because there is no guarantee of the order in which GC will clean up objects. So in my case, the MemoryMappedFile object may be cleaned up prior to my finalizer using it.
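To illustrate the concern, here is a minimal sketch of the pattern I'm worried about (class and field names are made up):

```csharp
using System.IO.MemoryMappedFiles;

class Participant
{
    private readonly MemoryMappedViewAccessor _view;

    public Participant(MemoryMappedFile mmf)
    {
        _view = mmf.CreateViewAccessor();
    }

    ~Participant()
    {
        // UNSAFE: by the time this finalizer runs, _view (and the
        // MemoryMappedFile behind it) may already have been finalized,
        // because the GC makes no guarantee about finalization order.
        _view.Write(0, (byte)0);   // may throw ObjectDisposedException
    }
}
```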

    Is my analysis correct? If so, what alternatives are available in managed code to reliably update a data structure in shared memory if my process shuts down unexpectedly? Is another means of interprocess communication preferable for some reason?


    • Edited by MarkB Zvilius Thursday, April 4, 2013 12:18 AM remove unnecessary sentence
    Thursday, April 4, 2013 12:16 AM

Answers

  • If I understand your suggestion correctly, I would have a "master" process that would launch the sub-processes and handle the subprocess exit by subscribing to the Process.Exited event.

    I want to avoid having "one process to rule them all." It increases complexity, reduces flexibility, and introduces its own level of brittleness: e.g. what happens if the master task crashes? My preference is to have all the processes be peers.

    Here is the solution I've been thinking of: use P/Invoke and the Win32 CreateFileMapping API to get a handle to a shared memory region. Then in managed code I can use a SafeHandle-derived class to manage that handle. In ReleaseHandle() I can perform the shared memory write that "signs off" the process before closing the handle to the shared memory.

    Since SafeHandle.ReleaseHandle() is called from the SafeHandle finalizer, which is marked as a critical finalizer, it should be extremely reliable.
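A minimal sketch of what I have in mind follows; the names, the one-byte-per-process sign-off slot, and the offsets are all placeholders, not a finished design:

```csharp
using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

class SignOffHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateFileMapping(IntPtr hFile, IntPtr attrs,
        uint protect, uint sizeHigh, uint sizeLow, string name);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr MapViewOfFile(IntPtr hMap, uint access,
        uint offsetHigh, uint offsetLow, UIntPtr bytes);

    [DllImport("kernel32.dll")]
    static extern bool UnmapViewOfFile(IntPtr addr);

    [DllImport("kernel32.dll")]
    static extern bool CloseHandle(IntPtr handle);

    const uint PAGE_READWRITE = 0x04, FILE_MAP_WRITE = 0x02;
    IntPtr _view;              // raw pointer: safe to touch in ReleaseHandle
    readonly int _slotOffset;  // where this process signs off

    public SignOffHandle(string name, uint size, int slotOffset) : base(true)
    {
        _slotOffset = slotOffset;
        // Pagefile-backed mapping: hFile = INVALID_HANDLE_VALUE (-1)
        SetHandle(CreateFileMapping(new IntPtr(-1), IntPtr.Zero,
            PAGE_READWRITE, 0, size, name));
        _view = MapViewOfFile(handle, FILE_MAP_WRITE, 0, 0, (UIntPtr)size);
    }

    protected override bool ReleaseHandle()
    {
        // Runs from SafeHandle's critical finalizer, so the sign-off
        // happens about as reliably as managed code allows.
        if (_view != IntPtr.Zero)
        {
            Marshal.WriteByte(_view, _slotOffset, 1); // 1 = "I have exited"
            UnmapViewOfFile(_view);
        }
        return CloseHandle(handle);
    }
}
```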

    It seems unfortunate to have to write my own P/Invoke code to use the Win32 shared memory API when there is a .NET class that does it already (MemoryMappedFile), but I don't see a better solution at this time. Still hoping someone here has a better idea!

    Thanks,

    Mark

    • Marked as answer by MarkB Zvilius Friday, April 12, 2013 6:55 PM
    Tuesday, April 9, 2013 10:53 PM

All replies

  • Hi Mark,

    >>So in my case, the MemoryMappedFile object may be cleaned up prior to my finalizer using it.

When you know the GC will collect the object, you don't need a finalizer for it. If you want to release the resource explicitly, you should implement the Dispose method rather than a finalizer.

    I hope this is clear.

    Best regards,


    Mike Feng
    MSDN Community Support | Feedback to us
    Develop and promote your apps in Windows Store
    Please remember to mark the replies as answers if they help and unmark them if they provide no help.

    Thursday, April 4, 2013 7:07 AM
    Moderator
  • Mike,

    You have not addressed my situation. It's not a matter of releasing the MemoryMappedFile object; I need to use the MemoryMappedFile object to perform a write to shared memory as the process is ending. The purpose is, essentially, to communicate to the other processes that this process is ending.

    To reiterate: "When a process exits, I need to reliably “sign-off” of the interaction by writing a value to the shared memory."

    Thanks,

    Mark

    Thursday, April 4, 2013 5:03 PM
  • Hi Mark,

Sorry for the misunderstanding.

    So how about this event: http://msdn.microsoft.com/en-us/library/system.diagnostics.process.exited.aspx  

It fires when the process exits, so you can take your actions then.
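For example, a minimal sketch ("worker.exe" is a placeholder executable name):

```csharp
using System;
using System.Diagnostics;

class Launcher
{
    static void Main()
    {
        var child = new Process();
        child.StartInfo.FileName = "worker.exe";
        child.EnableRaisingEvents = true;   // required for Exited to fire
        child.Exited += (sender, e) =>
            Console.WriteLine("Child exited with code " + child.ExitCode);
        child.Start();
        child.WaitForExit();
    }
}
```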

    Best regards,


    Mike Feng

    Tuesday, April 9, 2013 4:58 AM
    Moderator
  • Hi Mark,

    >>If I understand your suggestion correctly, I would have a "master" process that would launch the sub-processes and handle the subprocess exit by subscribing to the Process.Exited event.

    Yes, that is what I mean.

    >>I want to avoid having "one process to rule them all."

If you don't start new processes from a "master" process, then you can simply write your sign-off value as the application ends. In a WinForms application, for example, you can do it in the FormClosed event; in a console application, you can do it on the last line of the static Main method.

    I hope I understand you correctly.

    Best regards,


    Mike Feng

    Wednesday, April 10, 2013 4:56 PM
    Moderator
  • "a winforms application for example, you can log your logs at the form_close event. For a console application, you can log your logs at the last line of the static Main method."

    You're missing one key word from my earlier posts: "reliably." I am trying to build a mechanism that will be reliable in the face of unhandled exceptions and other forms of unexpected process termination.

Neither Form.Closing for WinForms nor "the last line of Main" for console apps is guaranteed to execute. That's why my thoughts turned toward using a finalizer as the most reliable solution available: the GC is going to come along and pick up the pieces no matter how the process terminated.

    Am I wrong?

    Wednesday, April 10, 2013 6:02 PM
  • Hello Mr. Zvilius, may I give it a try?

In your case I would recommend using a named pipe. Pipes are most often used for interprocess communication between a master process and its children, but they can be used between unrelated processes as well.

Either you use a named pipe, or you invent a mechanism to notify your peer processes of the pipe's handle. You should also make sure that the other processes are privileged enough to access the pipe.

The basic idea is to check whether the pipe already exists and, if not, create it. Afterwards, launch a handler thread that listens on the pipe and processes the messages arriving on it.

At the start of your process, create or attach to the pipe and keep a reference to it in a local (or even static) variable, marked volatile.

Your concerns about the GC are justified, but there are ways to prevent the problem. I'll mention two here.

1. When you're using C#, put your code in a using statement over the pipe reference.
2. Hold the reference in a static volatile field in your process entry class. (Not recommended, but possible.)
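Option 1 might look like this (a sketch; "MyAppPipe" is an illustrative pipe name):

```csharp
using System.IO.Pipes;

class PipeUser
{
    static void Main()
    {
        using (var pipe = new NamedPipeClientStream(".", "MyAppPipe",
                                                    PipeDirection.InOut))
        {
            pipe.Connect(1000);   // wait up to 1 second for the server
            // ... normal application work; the pipe cannot be collected
            //     while control is inside this block ...
        }   // Dispose runs here on the normal exit path
    }
}
```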

Now to your reliability problem. First, as far as I know, what you are trying to achieve isn't possible without an observer process. Even if you manage to keep the CLR from terminating the process prematurely, you can't rest assured that Windows won't kill the process. In fact, Windows may simply wipe out the process because of a native exception (for example, the well-known access violation that produces the familiar problem-report dialog) that the CLR never handled because of a bad setting in the application's manifest, or because of a simple Ctrl+C in a console application. Someone might even kill the CLR thread, and then everything breaks.

I would recommend reading up on AppDomains and how SQL Server handles this problem.

If you still need separate processes after that, then build a watchdog system. Create a thread that writes a short "Hey, I'm alive! Process ID: xyz!" message every few milliseconds; that way you would even detect a hung watchdog thread inside a live process. This is what an observer process does: it just observes. If a process crashes, it takes some action; if the observer itself crashes, you simply restart it. You wouldn't have a single master process, and the pipe stays open as long as at least one handle to it is open.

Another method is deriving a class from CriticalFinalizerObject, but doing this efficiently for the pipe would require a C++/CLI library. You will also run into various problems, because a critical finalizer operates under heavy restrictions: for example, you may not allocate a single byte! If you violate one of those rules (stated under "Code Not Permitted in CERs"), the process is terminated immediately. The process is also terminated if you take too long, which can happen when another process is writing to the pipe at the same time. The same problem applies to memory-mapped files.

    I hope you can follow my thoughts and find a solution.

    Greetings, Intel-x86

    Wednesday, April 10, 2013 10:29 PM
  • Thanks for your ideas.

    Actually, I think your solution dovetails fairly well with what I suggested earlier.

    You suggest using named pipes for interprocess communication (IPC), and I plan to use a named, non-persisted memory mapped file ("shared memory") for IPC .. along with a named mutex for synchronization. In my case, where there are N peer processes that start and stop asynchronously, and communication is more "broadcast" than point-to-point, I think the shared memory is more appropriate. Is there some specific advantage of pipes that causes you to recommend that form of IPC over shared memory?

    Regarding reliability you suggest:

    "Another method is using a class deriving from the Critical Finalizer class, but this would require a C++/CLI library..." This is essentially what I proposed earlier: use P/Invoke to access the Win32 shared memory API and hold the handle in a SafeHandle-derived object.

    You also suggest having an observer process of some sort .. essentially my own "garbage collector." That suggestion is well taken, but I probably won't go that far. I need to make a reasonable trade-off between complexity (cost) and reliability. My goal is to find a way to keep my IPC coherent in the face of typical sorts of crashes, e.g. unhandled exception, while keeping the solution as lightweight as possible.

    I appreciate the discussion,

    Mark

    Thursday, April 11, 2013 5:45 PM
  • Hi Mark,

I suggest using pipes because they save you a lot of work and eliminate a lot of debugging. A named pipe gives you the shared memory as a stream, and together with a named mutex under the covers, pipes are the best "broadcast" channel you can get for a specific set of processes.

Use a thread to listen on the pipe. Multiple processes can connect to it, and if one process writes to the pipe, all the other connected processes can read that data too. Mutexes are rather complex to get right and don't cooperate well with a critical finalizer/CER, which I'll come back to below as a solution for your reliability problem.

What I would like to say is: don't reinvent the wheel. Don't trouble yourself with mutual exclusion and with ensuring that every shared-memory access is preceded by a mutex check. Pipes provide exactly the tool you want, and the pipe's built-in mutual exclusion is faster, which makes it better suited for use in a critical finalizer. You also don't have to do any mutual exclusion yourself, because the underlying Win32 API does it for you; this reduces the risk of running into problems caused by improper locking or by reads and writes that aren't atomic.

A lightweight and easy solution is to derive the class containing the static Main method from CriticalFinalizerObject and implement a finalizer. Prepare an exit-message byte buffer in advance and then try to write that message to the pipe from the finalizer. Just make sure you pay attention to the rules stated under "Code Not Permitted in CERs" on this page here.
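A rough sketch of that idea (the pipe name and message are placeholders, and this is not battle-tested):

```csharp
using System.IO.Pipes;
using System.Runtime.ConstrainedExecution;
using System.Text;

class Program : CriticalFinalizerObject
{
    static NamedPipeClientStream _pipe;
    static readonly byte[] _exitMsg = Encoding.ASCII.GetBytes("BYE");
    static Program _keepAlive;          // holds the finalizable instance

    ~Program()
    {
        // Critical finalizer: keep it short and allocate nothing.
        // If the write blocks for too long the CLR may still give up.
        if (_pipe != null)
            _pipe.Write(_exitMsg, 0, _exitMsg.Length);
    }

    static void Main()
    {
        _pipe = new NamedPipeClientStream(".", "MyAppPipe");
        _pipe.Connect(1000);
        _keepAlive = new Program();     // its finalizer does the sign-off
        // ... application work ...
    }
}
```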

There is one little problem with every solution, though: if the process that created the pipe or allocated the shared memory terminates, things become a bit complicated. You must periodically check whether the pipe is still connected, because the server process could have been terminated.

The best way would be to encapsulate the whole functionality in a dedicated thread; if the server process crashes, a thread in another process can take over. IPC (and inter-thread communication) requires synchronization throughout, but it's possible.

    If you need a sample or some visual explanation, feel free to ask.

    Best regards, Intel-x86

    Thursday, April 11, 2013 8:08 PM
  • Thanks again for the discussion and your opinions.

    Based on your strong advocacy for named pipes, I did some further research on pipes. I have concluded that pipes are not the appropriate IPC solution for my application for the following reasons.

    • My application requires random access to the shared data. The pipe abstraction presents the data as a stream.
    • My application does not require all processes to be notified when the shared data changes, which would be an advantage of a pipe.
    • My application cannot extend to processes running on remote machines, which would be an advantage of a pipe.
• My application is inherently peer-to-peer: I don't know which set of processes may be running at any given time, which will start first, or which will end last. The pipe API is designed around a client-server model.
    • Using pipes would not simplify the implementation of the reliability requirement over my shared memory proposal. In fact it would be more complex because a pipe has an inherent concept of ownership, whereas the shared memory object will naturally stay open as long as anyone holds a handle for it.

    Thanks again for your thoughts. I have spent enough time researching and discussing this, and am going to mark my proposal above as the answer.

    Cheers,

    Mark Z.

    Friday, April 12, 2013 6:54 PM
  • Hi Mark,

    I'm glad you found a solution for your problem.

In your case it really is better to use shared memory; the earlier posts led me to guess that you needed it mainly to notify the other processes of a termination or creation.

    I think you'll take the Critical Finalizer solution for your reliability issue.

Please, if you don't mind, post a short statement of how you solved this case and mark that post as an answer too. It would be great for other users.

    Greetings, Intel-x86

    Friday, April 12, 2013 7:03 PM