
Cause of validation accuracy and loss decreasing in tandem

Discussion in 'Computer Science' started by sousdev, Oct 8, 2018.

  sousdev (Guest)

    I am training a 3-layer RNN for 50 epochs. Each layer has 256 cells and 0.3 dropout, and the learning rate is 0.0001. The issue I am facing is that the validation accuracy is dropping sharply even while both the training and validation loss are decreasing, which seems counter-intuitive given my (very limited) knowledge.
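
    Roughly, the setup looks like the sketch below (written here with Keras LSTM layers; the cell type, optimizer, input shape, class count, and loss are assumptions/placeholders rather than my exact values):

        from tensorflow.keras import layers, models, optimizers

        SEQ_LEN = 100        # placeholder sequence length (assumption)
        NUM_FEATURES = 32    # placeholder feature dimension (assumption)
        NUM_CLASSES = 10     # placeholder number of output classes (assumption)

        # 3 recurrent layers, 256 cells each, 0.3 dropout per layer, as described above
        model = models.Sequential([
            layers.LSTM(256, return_sequences=True, dropout=0.3,
                        input_shape=(SEQ_LEN, NUM_FEATURES)),
            layers.LSTM(256, return_sequences=True, dropout=0.3),
            layers.LSTM(256, dropout=0.3),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])

        model.compile(
            optimizer=optimizers.Adam(learning_rate=0.0001),  # learning rate 0.0001 as above; Adam is an assumption
            loss="categorical_crossentropy",                  # assumed classification loss
            metrics=["accuracy"],
        )

        # history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=50)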

    What could cause such a large drop-off in accuracy?

