Re-Architecting Multi-Queue-Multi-Windows-Service-Multi-Web-Service project to Azure

  • Question

  • Good Day,

    I am working to re-architect an existing system into a cloud based system.  Right now, this system is a combination of many different elements, and I am trying to verify which Azure components should be used to replace our existing components.

    Here is the existing flow:

    1. A job is queued via a Web API (REST) into a Redis-based FIFO queue

     - Requests to process a job are created from an end user website when a user creates or manually requests a job to be run.

     - Requests to process a job are also created by "scheduled" tasks, which are saved in a SQL DB and queued via a batch job that runs every x minutes to check for jobs that need to be processed
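To make step 1 concrete, here is a minimal sketch of the two enqueue paths, using an in-memory deque as a stand-in for the Redis FIFO queue (LPUSH/BRPOP in Redis terms). The job names and schedule times are illustrative, not from the real system:

```python
from collections import deque
from datetime import datetime, timedelta

# In-memory stand-in for the Redis-based FIFO queue.
job_queue = deque()

def enqueue_job(job_id: str) -> None:
    """Path 1: called by the Web API when a user creates or requests a job."""
    job_queue.append(job_id)

def enqueue_due_scheduled_jobs(schedules, now) -> None:
    """Path 2: batch job run every x minutes; queues any scheduled task that is due."""
    for job_id, due_at in schedules:
        if due_at <= now:
            enqueue_job(job_id)

now = datetime(2017, 8, 2, 20, 0)
schedules = [("nightly-report", now - timedelta(minutes=5)),   # due
             ("weekly-audit", now + timedelta(days=1))]        # not yet due
enqueue_job("user-requested-42")
enqueue_due_scheduled_jobs(schedules, now)
print(list(job_queue))  # FIFO order: user job first, then the due scheduled job
```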

    2. A processor service (Windows Service) picks up a queued job and moves it to the in-process queue (Redis is used to detect stalled processes)

     - Status updates are posted back to Redis so that the website can display real-time job status

     - Jobs can take anywhere from a minute to over an hour to process, depending on size and the speed of external resources outside our control
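The pick-up-and-move behavior in step 2 is the classic Redis "reliable queue" pattern (RPOPLPUSH to an in-process list, plus a timestamp so a watchdog can spot dead workers). Here is a minimal in-memory sketch of that logic; the 2-minute timeout matches the goal stated later, and the job names are made up:

```python
from collections import deque

pending = deque(["job-1", "job-2"])  # stand-in for the Redis pending queue
in_process = {}                      # job_id -> time the worker picked it up

def pick_up_job(now: float):
    """Move a job from pending to in-process, recording when it was taken."""
    if not pending:
        return None
    job_id = pending.popleft()
    in_process[job_id] = now
    return job_id

def find_stalled(now: float, timeout_s: float = 120.0):
    """Jobs held longer than the timeout are assumed stalled and re-queued."""
    stalled = [j for j, t in in_process.items() if now - t > timeout_s]
    for job_id in stalled:
        del in_process[job_id]
        pending.appendleft(job_id)  # put it back at the head of the queue
    return stalled

t0 = 0.0
pick_up_job(t0)
assert find_stalled(t0 + 60) == []          # within the timeout: nothing stalled
assert find_stalled(t0 + 180) == ["job-1"]  # past 2 minutes: re-queued
```

In real Redis this move would be a single RPOPLPUSH (or BRPOPLPUSH) so a worker crash can never lose a job between the two queues.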

    3.  The processor service executes an external call (right now I am testing this call as an Azure Function). The results of this call are processed by the service and generally produce a number of further, identical external calls. (i.e., a web page is downloaded and processed for links; the links are then checked, and if there are new links that haven't been processed, the job requests downloads for those links and starts the process over)

     - Downloaded data is maintained in memory for rules processing to generate reports

     - Downloaded data is saved to document storage for later retrieval and comparison
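The fan-out in step 3 amounts to a breadth-first crawl with de-duplication: each call yields more links, and only never-before-seen links are queued, so the job terminates once every reachable link has been processed. A self-contained sketch, with a made-up link graph standing in for the downloaded pages:

```python
from collections import deque

# Hypothetical link graph: page -> links found on it.
LINKS = {"start": ["a", "b"], "a": ["b", "c"], "b": [], "c": ["a"]}

def process_job(root: str) -> list:
    """Process root and every link reachable from it, each exactly once."""
    seen = {root}
    work = deque([root])
    processed = []
    while work:
        url = work.popleft()
        processed.append(url)        # download + rules processing would go here
        for link in LINKS.get(url, []):
            if link not in seen:     # skip links already processed or queued
                seen.add(link)
                work.append(link)
    return processed

print(process_job("start"))  # ['start', 'a', 'b', 'c']
```

The `seen` set is what guarantees the "if they are new links that haven't been processed" check and keeps the job from looping on cyclic links (note `c` links back to `a`).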

    4.  Once all sub-tasks are complete, the job report is saved to SQL Server, the status is updated in Redis as complete, and the job is removed from the processing queue.

    The part I am not sure how to handle in Azure is the Windows Service piece. This is really the weak point of our process: if the service dies for some reason, someone has to notice manually and restart the server/service.

    I would like to accomplish 2 primary things in re-architecting this:

    1.  Enable Azure to detect that something has sat in the queue longer than, say, 2 minutes without being picked up, and then automatically fire off a new processor.

    2.  Enable Azure to automatically scale resources up and down to handle the current load.
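Both goals reduce to decisions a watchdog (e.g. a timer-triggered job) could make every few seconds. A minimal sketch of the two decision functions, assuming the 2-minute threshold from goal 1; `jobs_per_worker` is a hypothetical tuning knob, not from the original system:

```python
import math

def queue_is_stuck(oldest_wait_s: float, threshold_s: float = 120.0) -> bool:
    """Goal 1: has the oldest queued item waited too long to be picked up?"""
    return oldest_wait_s > threshold_s

def desired_workers(queue_length: int, jobs_per_worker: int = 4) -> int:
    """Goal 2: scale the worker count up and down with the current backlog."""
    return max(1, math.ceil(queue_length / jobs_per_worker))

assert queue_is_stuck(180.0) and not queue_is_stuck(30.0)
assert desired_workers(0) == 1      # idle: keep one worker warm
assert desired_workers(9) == 3      # backlog of 9 jobs -> 3 workers
```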

    Now I could make the processor an Azure Function and set up a batch or something to fire every x number of seconds to check for queue objects and fire off a new call to the Azure Function. However, Azure Functions are limited to 1.5 GB of RAM, and my jobs often exceed this significantly (up to 8 GB or so).

    Is there something else out there that would make sense to use, and that would enable a more "serverless" architecture?

    Wednesday, August 2, 2017 8:00 PM

All replies

  • A WebJob using a queue trigger sounds like it would do what you want.


    Friday, August 4, 2017 8:31 PM