Is it possible to have time control below one millisecond in drivers?

  • Question

  • Hello, I am developing a driver that needs to handle I/O at a rate of about 100 microseconds.
    The problem is that my driver is sometimes interrupted by the system and ends up with more than one millisecond of delay.
    I have tried almost every synchronization object to keep the output going out at roughly 100-microsecond intervals, but with no luck.
    The hardware I work with has a limited buffer, and with several one-millisecond interruptions it eventually overflows.

    Is there a way to keep my driver sending output at a constant interval of about 100 microseconds without being interrupted by the system?

    The communication is done via Ethernet and I am using a protocol driver based on the NDIS 6.0 Connectionless sample.
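
    A minimal sketch of the kind of wait-based pacing described above, with SendOnePacket as a hypothetical placeholder for the real NDIS send path. Because kernel waits are only satisfied at clock-interrupt granularity, a loop like this cannot actually hold a 100-microsecond period, which matches the delays reported here:

        #include <ntddk.h>

        VOID SendOnePacket(VOID);        /* hypothetical placeholder for the NDIS send path */

        VOID SendLoop(_In_ PVOID Context)
        {
            LARGE_INTEGER interval;

            UNREFERENCED_PARAMETER(Context);

            interval.QuadPart = -1000;   /* relative 100 us, expressed in 100 ns units */

            for (;;) {
                SendOnePacket();

                /* The wait only completes at a clock interrupt, so the thread
                 * actually sleeps for at least ~1 ms, not 100 us. */
                KeDelayExecutionThread(KernelMode, FALSE, &interval);
            }
        }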

    Wednesday, November 26, 2014 7:23 PM

Answers

  • No, there is no way to do this. For that matter, with Ethernet collisions and retries, I am not sure you can guarantee this level of performance across the wire. This is really a poor hardware design these days, when you can build smart devices with a fair amount of buffering in them.

     

    Don Burn Windows Filesystem and Driver Consulting Website: http://www.windrvr.com

    Wednesday, November 26, 2014 7:35 PM

All replies

  • This is not poor hardware design; the hardware is for live recording and streaming. It needs to be fast.

    The driver we developed for Mac OS X handles the hardware just fine.

    I would say that nowadays, with quad-core CPUs, Windows should find a way to let I/O drivers reside on a core that is not directly affected by system interruptions. The system can run on the other three cores.
    It's just an idea for Microsoft; don't take it the wrong way.

    So, in this situation, our streaming will be delayed by at least one millisecond (in the best-case scenario), which is the limit of what our ears can detect... not so good...
    Wednesday, November 26, 2014 7:49 PM
  • Sorry, but the claim "it works on a given OS, but not Windows" has been proven false over and over again in my 40 years of driver development. I've been handed hardware that supposedly "works fine" on Linux or Mac, only to go to those systems and show that it breaks there as well. This is crappy hardware design. I've seen people try to make this work by playing a ton of games with process and interrupt affinity, but in the end, if you do this, there is nothing stopping some other device or application from thinking it can do the same, at which point your "take a core" model fails miserably. My favorite in these situations is when the sales guy sells N of your devices, all of which go into a single system with fewer than N cores.

    Fix your hardware.


    Don Burn Windows Filesystem and Driver Consulting Website: http://www.windrvr.com

    Wednesday, November 26, 2014 7:56 PM
  • I understand your position of protecting Microsoft, but on the Mac IT DOES WORK!!! (and with only six samples of latency)

    But I won't get into that discussion; I will give you a chance to propose a solution, if you have one...

    So, a bit of background first. Our ears start detecting sound delay at about one millisecond; at 1.2 ms most ears will be able to detect it, and with a delay above 1.5 ms the sound is just unbearable. So... I need to record the input sound and deliver the processed output sound with a delay lower than 1 ms. What's your proposal?

    With Windows it is just not possible, is it? Any single interruption will blow the delay budget, and the problem goes well beyond that, because the interruptions happen about 100 to 200 times a second.

    Wednesday, November 26, 2014 8:33 PM
  • I'm not here to protect Microsoft; I don't work for them or receive anything from them. But I have been writing device drivers for over 40 years, and I can tell you that hardware with this type of delivery requirement is a poor design. It will always have failure modes, no matter what the OS is.

    My proposal is the same one I have given designers for a long time: if you put in time-critical interfaces, sooner or later they will bite you. It doesn't matter whether it was DEC systems in 1970, Sun workstations in the 1980s, or PCs for the last 20 years; put in a critical timing requirement without enough hardware support, and you will fail.


    Don Burn Windows Filesystem and Driver Consulting Website: http://www.windrvr.com

    Wednesday, November 26, 2014 8:40 PM
  • Yes, this is possible, but it will take a fair amount of work. The trick is to change the thread and interrupt affinity masks of everything else in the system so that nothing else is serviced on one particular CPU (let's call it the real-time CPU). Your code will have its affinity set so that it runs only on the real-time CPU. You then need to write your code to run continuously and use polling for all of its communication with other entities. This technique is the only way to get this to work. Also, be aware that due to general PC hardware limitations, you cannot get timer service at less than 1 millisecond.
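
    A rough sketch of that arrangement, assuming a quad-core machine, with PollAndSendIfReady as a hypothetical placeholder for the driver's actual work:

        #include <ntddk.h>

        VOID PollAndSendIfReady(VOID);      /* hypothetical placeholder for the real work */

        #define REALTIME_CPU 3              /* assumed: last core of a quad-core machine */

        /* Runs forever on the reserved core; started with PsCreateSystemThread. */
        VOID RealtimeLoop(_In_ PVOID Context)
        {
            UNREFERENCED_PARAMETER(Context);

            /* Pin this thread to the reserved core. Everything else in the system
             * (threads and interrupts) must have its affinity steered away from it. */
            KeSetSystemAffinityThreadEx((KAFFINITY)1 << REALTIME_CPU);

            for (;;) {
                PollAndSendIfReady();       /* poll shared state; no waits, no timers */
                YieldProcessor();           /* pause hint to keep the busy loop friendlier */
            }
        }

        /* Example creation, e.g. from DriverEntry: */
        NTSTATUS StartRealtimeThread(VOID)
        {
            HANDLE threadHandle;
            NTSTATUS status = PsCreateSystemThread(&threadHandle, THREAD_ALL_ACCESS,
                                                   NULL, NULL, NULL,
                                                   RealtimeLoop, NULL);
            if (NT_SUCCESS(status)) {
                ZwClose(threadHandle);
            }
            return status;
        }

    The hard part is the other half: steering every other thread's and interrupt's affinity away from the reserved core, which the sketch above does not show.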

     -Brian


    Azius Developer Training www.azius.com Windows device driver, internals, security, & forensics training and consulting. Blog at www.azius.com/blog

    Wednesday, November 26, 2014 9:11 PM
    Moderator
  • I see what you are saying, Don, but better hardware implies higher costs. To explain what I said before in more detail: comparing two similar i5 desktops, Windows 7 has a worst case of 2.3 ms for receiving and delivering NDIS packets, while the Mac has a 400 µs worst case. For these time-critical applications the Mac does seem to have an architecture that performs about eight times better (so far at least; I will try what Brian Catlin proposed).

    Brian, if I apply the changes to the interrupt affinity masks, do I still use Event objects for synchronization? Could you give me a bit more detail on your idea, or point me to some material I could study?

    Thanks

    Tuesday, December 16, 2014 6:43 PM
  • Yes, you can still use event objects.

    The concept is pretty straightforward, as I described, but as I mentioned, it will take a bit of work. There is some old documentation out there, such as this. There was (maybe still is) a company named VenturCom that had a product that worked this way (I consulted on its design). There may still be references to it in MSDN.
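
    Given that timer and event waits cannot resolve below roughly one millisecond, the pacing on the reserved core can instead spin on KeQueryPerformanceCounter. A small sketch, again with SendOnePacket as a hypothetical placeholder:

        #include <ntddk.h>

        VOID SendOnePacket(VOID);           /* hypothetical placeholder, as before */

        /* Intended to run on the pinned real-time thread. */
        VOID PacedSendLoop(VOID)
        {
            LARGE_INTEGER freq;
            LARGE_INTEGER now = KeQueryPerformanceCounter(&freq);
            LONGLONG ticksPer100us = freq.QuadPart / 10000;   /* counter ticks per 100 us */
            LONGLONG next = now.QuadPart + ticksPer100us;

            for (;;) {
                /* Busy-wait until the next 100 us boundary; no kernel wait involved. */
                do {
                    YieldProcessor();
                    now = KeQueryPerformanceCounter(NULL);
                } while (now.QuadPart < next);

                SendOnePacket();
                next += ticksPer100us;
            }
        }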

     -Brian


    Azius Developer Training www.azius.com Windows device driver, internals, security, & forensics training and consulting. Blog at www.azius.com/blog

    Tuesday, December 16, 2014 6:50 PM
    Moderator