Test Run - Deep Neural Network IO Using C#

    General discussion

  • Many of the recent advances in machine learning, like making predictions using data, have been realized using deep neural networks (DNNs). James McCaffrey introduces you to DNNs and explains how they work.

    Read this article in the August 2017 issue of MSDN Magazine

    Tuesday, August 1, 2017 7:35 PM
    Owner

All replies

  • Great article as always by Dr. McCaffrey. The code is missing - hope it's fixed soon :)
    Wednesday, August 2, 2017 9:43 AM
  • Great so far. Since I dove into neural networks, these topics in MSDN Magazine have given great insight into using them.
    Looking forward to the next chapters, about training deep neural nets.
    Great news that James McCaffrey also plans to write about non-image-classifying DNNs as well.
    His way of writing also encourages people to wonder not only about how it works, but also why it's done, and sometimes about his concerns; that's inspiring.

    The code, I assume, will be added later, as he's finishing the follow-up?
    Wednesday, August 2, 2017 9:35 PM
  • Great article as always by Dr. McCaffrey. The code is missing - hope it's fixed soon :)
    Agreed, very nice article and easy to follow. I would love to get the code to try out some sample runs. Also wouldn't mind reading some other AI articles from Dr. McCaffrey.
    Monday, August 7, 2017 1:41 PM
  • Great article, but it leaves several unanswered questions:

    1. Why did he choose a network with 3 hidden layers? Why not 2, or 4, or 100?
    2. Why did he choose the 4-2-2 hidden network structure? Are there some "magic" properties to this particular structure? Why not 4-4-4, or 2-3-2, or 64-29-46?
    3. What are the advantages/disadvantages of having more or fewer nodes?
    4. What is the significance of choosing a specific activation function? Would just any ol' function that constrains its output to [0, 1] work just as well (or is that constraint even necessary)? Do all nodes (except the output nodes) have to use the same activation function? (See the sketch below.)
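    On that last question, for anyone curious: common activation functions are not all confined to [0, 1], and hidden layers don't have to share one. A minimal C# sketch of three popular choices (illustrative only, not the article's code):

        using System;

        class Activations
        {
            // Logistic sigmoid: squashes any input into (0, 1).
            static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

            // Hyperbolic tangent: squashes into (-1, 1).
            static double Tanh(double x) => Math.Tanh(x);

            // Rectified linear unit: unbounded above, common in deep nets.
            static double Relu(double x) => Math.Max(0.0, x);

            static void Main()
            {
                foreach (double x in new[] { -2.0, 0.0, 2.0 })
                    Console.WriteLine($"x = {x,4:F1}  sigmoid = {Sigmoid(x):F4}  tanh = {Tanh(x):F4}  relu = {Relu(x):F4}");
            }
        }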
    Monday, August 14, 2017 8:06 PM
  • Yes, thanks for introducing me to the subject, Dr. McCaffrey!

    In case anyone's interested, here's a "For Dummies"-style article on the subject that I also found helpful.

    https://medium.com/technologymadeeasy/for-dummies-the-introduction-to-neural-networks-we-all-need-c50f6012d5eb

    • Edited by OneLChela Monday, August 14, 2017 8:44 PM Added an article link for helpfulness
    Monday, August 14, 2017 8:28 PM
  • As far as I can tell, the number of nodes/inputs/outputs was arbitrary (and small) just to give us a demo... However, since I'm new to DNNs, I wouldn't know if that combination is common.

    When the code is out, I plan to adjust it to accept any number of nodes/inputs/outputs.
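
    A minimal sketch of what that might look like - a network class that takes an arbitrary layer-size array (the names and sizes here are assumptions, not the article's code):

        using System;

        class DeepNet
        {
            public readonly int[] Sizes;          // Sizes[0] = inputs, last entry = outputs.
            public readonly double[][,] Weights;  // Weights[L][i, j]: node i in layer L to node j in layer L+1.
            public readonly double[][] Biases;    // One bias per hidden/output node.

            public DeepNet(params int[] sizes)
            {
                Sizes = sizes;
                Weights = new double[sizes.Length - 1][,];
                Biases = new double[sizes.Length - 1][];
                for (int L = 0; L < sizes.Length - 1; ++L)
                {
                    Weights[L] = new double[sizes[L], sizes[L + 1]];
                    Biases[L] = new double[sizes[L + 1]];
                }
            }

            public int NumWeights()  // Total count of weights plus biases.
            {
                int n = 0;
                for (int L = 0; L < Sizes.Length - 1; ++L)
                    n += Sizes[L] * Sizes[L + 1] + Sizes[L + 1];
                return n;
            }
        }

        class Demo
        {
            static void Main()
            {
                var net = new DeepNet(3, 4, 2, 2, 2);  // 3 inputs, 4-2-2 hidden, 2 outputs.
                Console.WriteLine("total weights + biases = " + net.NumWeights());
            }
        }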

    Monday, August 14, 2017 8:33 PM
  • I think the article is still missing a training method.

    Like one of his earlier articles, a deep network is discussed without the training part.
    It's quite essential to a neural network to train it.

    So will this article get an update, or a follow-up?
    Monday, August 28, 2017 2:30 PM
  • The author indicated that the training (the back-propagation algorithm) would be explained in detail in a 'future article'.

    The Kwisatz Haderach of the Pixel Syndicate

    Tuesday, August 29, 2017 2:16 PM
  • Please pardon the trivial proof-reading comment but:

    "Because there’s one bias for reach hidden and output node, "

    should be

    "Because there’s one bias for EACH hidden and output node,"

    and

    "The softmax of three arbitrary values, x, y, y is:" should be

    "The softmax of three arbitrary values, x, y, Z is:"

    Tuesday, September 5, 2017 9:06 PM
  • Hey all! Just saw the Back-Propagation 'training' article on MSDN Magazine @ Test Run - Deep Neural Network Training ... the code download is also available for C#.

    Wil

    The Kwisatz Haderach of the Pixel Syndicate

    Wednesday, September 6, 2017 2:28 PM
  • The first diagram is missing a detail referred to in the article ("output[1] has a bias value of 0.36"): the label for the bias value on output[1] is missing.
    Thursday, October 19, 2017 7:48 AM