Tag: neural-network
Simple neural network gives random prediction results
I have been trying to build a simple neural network myself (3 layers) to classify the MNIST dataset. I referenced some code online and wrote some parts myself. The code runs without any errors, but something is wrong with the learning process: the prediction results all seem “random”. Applying the learning process to the network and …
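For reference, here is a minimal sketch (not the asker's code) of a correct forward and backward pass for a 3-layer MNIST classifier in plain NumPy, assuming inputs `X` of shape (n, 784) and one-hot labels. Two frequent causes of "random" predictions, saturated activations from overly large initial weights and a sign error in the update, are called out in the comments.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

# Small random weights: large initial weights saturate the sigmoids,
# which is a common cause of predictions that never leave "random".
W1 = rng.normal(0, 0.01, (784, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.01, (64, 10));  b2 = np.zeros(10)

def train_step(X, y_onehot, lr=0.1):
    global W1, b1, W2, b2
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    p = softmax(h @ W2 + b2)
    # Backward pass: softmax + cross-entropy gives the simple (p - y) gradient
    dlogits = (p - y_onehot) / len(X)
    dW2 = h.T @ dlogits; db2 = dlogits.sum(axis=0)
    dh = dlogits @ W2.T * h * (1 - h)          # sigmoid derivative
    dW1 = X.T @ dh;      db1 = dh.sum(axis=0)
    # Gradient descent: note the minus sign; adding the gradient instead
    # of subtracting it makes the loss diverge.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    return -np.mean(np.sum(y_onehot * np.log(p + 1e-12), axis=1))
```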
Masking layer is not working with MLPs; how to add a custom layer with masking?
I’m using MLPs to forecast a time series, and I implemented code that contains a masking layer to let the model skip the masked values. For instance, my time series has a lot of NaN values, which I fill with value = -999. I don’t want to remove them, but I want the Keras masking to …
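A minimal sketch of the usual Keras approach, assuming data shaped (batch, timesteps, features): the key point is that `Masking` only has an effect on layers that consume the mask. `Dense` layers ignore it, which is exactly why masking appears "not to work" with a plain MLP; a mask-aware layer such as an LSTM actually skips the masked timesteps.

```python
import numpy as np
import tensorflow as tf

# Hypothetical data: NaNs already replaced by -999 at some timesteps.
X = np.random.rand(32, 10, 1).astype("float32")
X[:, 3, :] = -999.0  # simulate missing values

model = tf.keras.Sequential([
    # Marks timesteps whose features all equal -999 as masked.
    tf.keras.layers.Masking(mask_value=-999.0, input_shape=(10, 1)),
    # Dense layers do not consume the mask, so a pure MLP ignores it;
    # an RNN such as LSTM does consume it and skips masked timesteps.
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```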
Adam optimizer not working on cost function
I wanted to make my own neural network for the MNIST dataset, and I am writing the code using TensorFlow. I imported the libraries and the dataset, did one-hot encoding, and assigned the weights and biases. Then I did the forward propagation with the random values, and for backpropagation and cost minimization I used a loss function …
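As a sanity check, here is a minimal sketch of that pipeline with `tf.GradientTape` and Adam (variable names are hypothetical; the asker's code is not shown). A classic reason Adam appears "not to work" in this setup is applying softmax to the outputs before a with-logits loss, which flattens the gradients.

```python
import tensorflow as tf

# Low-level setup: 784 inputs, 10 classes, one-hot labels assumed.
W = tf.Variable(tf.random.normal([784, 10], stddev=0.01))
b = tf.Variable(tf.zeros([10]))
opt = tf.keras.optimizers.Adam(learning_rate=1e-3)

def train_step(x, y_onehot):
    with tf.GradientTape() as tape:
        logits = x @ W + b  # forward propagation, raw logits
        # This loss applies softmax internally; feeding it already
        # softmaxed values is a common cause of a stuck cost.
        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=y_onehot,
                                                    logits=logits))
    grads = tape.gradient(loss, [W, b])
    opt.apply_gradients(zip(grads, [W, b]))
    return loss
```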
TensorFlow ValueError: Shapes (64, 1) and (1, 1) are incompatible
I’m trying to build a Siamese neural network to analyze the MNIST dataset; however, when trying to fit the model to the dataset I encounter this problem, according to which my training data and label shapes mismatch. I tried changing the loss function, and I also tried to squeeze the labels array; neither of these “solutions” worked. Here are …
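The question's code is not shown, but this class of error typically arises when the label tensor's shape does not line up with the model's (batch, 1) output. A hedged sketch of one common fix, reshaping the labels to match the predictions:

```python
import numpy as np
import tensorflow as tf

# Hypothetical reproduction: the model emits one value per sample,
# shape (64, 1), while the labels arrive as a flat vector.
y_pred = tf.fill([64, 1], 0.5)        # model output for a batch of 64
y_true = np.ones([64], np.float32)    # labels, shape (64,)

# Give the labels the same (batch, 1) shape as the predictions before
# handing both to the loss.
y_true = tf.reshape(y_true, [-1, 1])
loss = tf.keras.losses.BinaryCrossentropy()(y_true, y_pred)
```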
AttributeError: module 'keras.api._v2.keras.utils' has no attribute 'Sequential'. I have just started with neural networks, so any help would be appreciated
Answer You should be using tf.keras.Sequential() or tf.keras.models.Sequential(). Also, you need to define a valid loss function. Here is a working example:
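The answer's original example is not reproduced here; the following is a minimal sketch consistent with its advice, using tf.keras.Sequential and a valid loss (the layer sizes are illustrative):

```python
import tensorflow as tf

# Build the model via tf.keras.Sequential, not keras.utils.Sequential.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

# compile() needs a valid loss object or registered loss name.
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
```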
Difference between the calculation of the training loss and the validation loss using PyTorch
I want to use the following code from this traditional image classification problem for my regression problem. The code can be found here: GeeksforGeeks-Training Neural Networks with Validation using Pytorch. I can understand why the training loss is summed up and then divided by the length of the training data in this example, but I can’t see why the validation loss …
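For context, here is a minimal sketch of the pattern that page follows (`model`, `optimizer`, `train_loader`, and `valid_loader` are assumed to exist): both losses are accumulated per batch and normalized by the dataset length in the same way; the real differences on the validation side are eval mode and disabled gradient tracking.

```python
import torch
from torch import nn

criterion = nn.MSELoss()

def run_epoch(model, optimizer, train_loader, valid_loader):
    model.train()
    train_loss = 0.0
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        train_loss += loss.item() * x.size(0)   # sum of per-sample losses

    model.eval()
    valid_loss = 0.0
    with torch.no_grad():                        # no gradients at validation
        for x, y in valid_loader:
            loss = criterion(model(x), y)
            valid_loss += loss.item() * x.size(0)

    # Both averages are taken over the number of samples, not batches.
    return (train_loss / len(train_loader.dataset),
            valid_loss / len(valid_loader.dataset))
```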
Predictions become irrational after adding weights to the fit [closed]
I have a model with several dense layers that behaves normally in all respects. …
Cannot run Carlini and Wagner attack using foolbox on a TensorFlow model
I am using the latest version of foolbox (3.3.1). My code simply loads a ResNet-50 CNN, adds some layers for a transfer-learning application, and loads the weights as follows. Now I would like to attack it using the foolbox 3.3.1 Carlini and Wagner attack; here is the way I load the model for foolbox. My dataset is split …
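For reference, the usual foolbox 3.x flow for a TensorFlow model looks roughly like this. It is a sketch under the assumption that `model`, `images`, and `labels` already exist as TensorFlow tensors with pixel values in [0, 1]:

```python
import foolbox as fb

# Wrap the Keras/TF model; bounds must match the preprocessing of `images`.
fmodel = fb.TensorFlowModel(model, bounds=(0, 1))

# L2 Carlini & Wagner attack; `steps` trades runtime for attack strength.
attack = fb.attacks.L2CarliniWagnerAttack(steps=1000)

# epsilons bounds the allowed perturbation size; passing a list returns
# one result per epsilon in `is_adv`.
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=[0.5])
```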
Why should the input_shape property of a Conv2D layer be specified only for the first Conv2D layer?
I am new to AI/ML. I’m learning TensorFlow. In some tutorials, I noticed that the input_shape argument of a Conv2D layer was specified only for the first one. The code looked kind of like this: … In many examples, not only the one above, the instructor didn’t include that argument. Is there any reason for that? Answer The next layers derive their input shapes from the output of the preceding layer, so only the first layer needs input_shape for Keras to build the whole model.
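A minimal sketch of the pattern being asked about (layer sizes are illustrative): input_shape appears only on the first layer, and each subsequent layer infers its input from the previous layer's output, which model.summary() makes visible.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu",
                           input_shape=(28, 28, 1)),   # only the first layer
    tf.keras.layers.Conv2D(64, 3, activation="relu"),  # shape inferred
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

model.summary()  # prints the inferred output shape of every layer
```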