No matter how many epochs I use or how much I change the learning rate, my validation accuracy stays in the 50s. I'm using 1 dropout layer right now, and if I use 2 dropout layers, my max train accuracy is 40% with 59% validation accuracy. And currently with 1 dropout layer, here are my results: again, the max it can reach is 59%. Here's the
Tag: machine-learning
Python: How to retrieve the best model from an Optuna LightGBM study?
I would like to get the best model to use later in the notebook to predict using a different test batch. Reproducible example (taken from the Optuna GitHub): my understanding is that the study below will tune for accuracy. I would like to somehow retrieve the best model from the study (not just the parameters) without saving it as a
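One common pattern (a minimal sketch, not taken from the question; the sklearn-style objective and dataset here are assumptions) is to re-fit a model with study.best_params after optimization, since Optuna stores only each trial's parameters, not the fitted estimator:

```python
import optuna
import lightgbm as lgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

def objective(trial):
    params = {
        "num_leaves": trial.suggest_int("num_leaves", 2, 256),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
    }
    model = lgb.LGBMClassifier(**params)
    model.fit(X_train, y_train)
    return accuracy_score(y_valid, model.predict(X_valid))

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)

# Optuna keeps only the best trial's parameters, so re-fit to obtain the model itself
best_model = lgb.LGBMClassifier(**study.best_params)
best_model.fit(X_train, y_train)
```

Alternatively, the fitted booster can be stashed inside the objective (e.g. via a callback or a module-level variable) if re-training is too expensive.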
Concatenate three inputs of different dimensions in Keras
I have two inputs of the same size; I applied word embeddings of vector size 128 and then reshaped them, giving both inputs a shape of (None, 1, 128). Another input, the context, has dimension (None, 1, 18). I want to concatenate these three inputs and then feed the combined output to an LSTM layer, but I am unable to concatenate the inputs as
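A minimal sketch of how such a concatenation might look (the shapes come from the question; vocabulary size, layer sizes, and names are assumptions): concatenating on the feature axis gives shape (None, 1, 274), which an LSTM can consume.

```python
from tensorflow.keras import layers, Model

input_a = layers.Input(shape=(1,), name="word_a")
input_b = layers.Input(shape=(1,), name="word_b")
input_ctx = layers.Input(shape=(1, 18), name="context")

emb = layers.Embedding(input_dim=10000, output_dim=128)  # vocab size is an assumption
a = layers.Reshape((1, 128))(emb(input_a))   # (None, 1, 128)
b = layers.Reshape((1, 128))(emb(input_b))   # (None, 1, 128)

# Concatenate along the last (feature) axis: (None, 1, 128 + 128 + 18)
merged = layers.Concatenate(axis=-1)([a, b, input_ctx])
out = layers.LSTM(64)(merged)

model = Model(inputs=[input_a, input_b, input_ctx], outputs=out)
model.summary()
```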
Python: Develop a Multiple Linear Regression Model From Scratch
I am trying to create a multiple linear regression model from scratch in Python. Dataset used: the Boston Housing dataset from sklearn. Since my focus was on model building, I did not perform any pre-processing steps on the data. However, I used an OLS model to calculate p-values and dropped 3 features from the data. After that, I used a
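For reference, a minimal from-scratch multiple linear regression via the normal equation (a generic sketch, not the asker's code; it assumes the design matrix has full column rank):

```python
import numpy as np

def fit_linear_regression(X, y):
    """Return intercept and coefficients via the normal equation."""
    X_aug = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend a bias column
    # beta = (X^T X)^{-1} X^T y, solved with lstsq for numerical stability
    beta, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
    return beta[0], beta[1:]

def predict(X, intercept, coefs):
    return intercept + X @ coefs
```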
How to reproduce the Bottleneck Blocks in Mobilenet V3 with Keras API?
Using the Keras API, I am trying to write MobileNetV3 as explained in this article: https://arxiv.org/pdf/1905.02244.pdf with the architecture as described in this picture: For that, I need to implement the bottleneck blocks from the previous article https://arxiv.org/pdf/1801.04381.pdf. See image for architecture: I managed to glue together the initial and final Conv layers: Where the bottleneck_block is given in the next
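As a point of reference, here is one way an inverted-residual bottleneck block (1x1 expansion, depthwise convolution, 1x1 linear projection, with a residual connection when shapes match) is commonly sketched with the Keras functional API; the kernel size, expansion factor, and the omission of squeeze-and-excite and h-swish are simplifying assumptions, not the paper's exact block.

```python
from tensorflow.keras import layers

def bottleneck_block(x, filters, kernel_size=3, stride=1, expansion=4):
    """Inverted residual block: 1x1 expand -> depthwise conv -> 1x1 project."""
    in_channels = x.shape[-1]
    # 1x1 pointwise expansion
    h = layers.Conv2D(in_channels * expansion, 1, padding="same", use_bias=False)(x)
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(6.0)(h)
    # depthwise convolution
    h = layers.DepthwiseConv2D(kernel_size, strides=stride, padding="same", use_bias=False)(h)
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(6.0)(h)
    # 1x1 linear projection (no activation)
    h = layers.Conv2D(filters, 1, padding="same", use_bias=False)(h)
    h = layers.BatchNormalization()(h)
    # residual connection only when the input and output shapes line up
    if stride == 1 and in_channels == filters:
        h = layers.Add()([x, h])
    return h
```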
How to generate accurate masks for an image from Mask R-CNN prediction in PyTorch?
I have trained a Mask R-CNN network for instance segmentation of apples. I am able to load the weights and generate predictions for my test images. The masks being generated seem to be in the correct location, but the mask itself has no real form; it just looks like a bunch of pixels. Training is done based on the dataset
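In torchvision's Mask R-CNN, the predicted masks are soft probability maps of shape (N, 1, H, W), so a common post-processing step (a sketch under the assumption that the model follows torchvision's detection API; the image tensor and thresholds are placeholders) is to filter by score and binarize the masks:

```python
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)          # placeholder; use a real test image tensor
with torch.no_grad():
    prediction = model([image])[0]

# 'masks' are per-instance probability maps in [0, 1]; binarize them
score_keep = prediction["scores"] > 0.7            # drop low-confidence detections
binary_masks = (prediction["masks"][score_keep, 0] > 0.5).to(torch.uint8)
```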
Neural Network Results always the same
Edit: For anyone interested, I made it slightly better. I used an L2 regularizer of 0.0001 and added two more dense layers with 3 and 5 nodes and no activation functions. I added dropout=0.1 for the 2nd and 3rd GRU layers, reduced the batch size to 1000, and also set the loss function to MAE. Important note: I discovered that my TEST dataframe was extremely small compared
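A rough sketch of the kind of stack described in the edit (only the 3- and 5-node dense layers, the 0.0001 L2 regularizer, the GRU dropout, and the MAE loss come from the question; the GRU widths and input shape are placeholders):

```python
from tensorflow.keras import layers, models, regularizers

model = models.Sequential([
    layers.GRU(64, return_sequences=True, input_shape=(30, 8)),   # input shape is a placeholder
    layers.GRU(64, return_sequences=True, dropout=0.1),
    layers.GRU(64, dropout=0.1),
    layers.Dense(3, kernel_regularizer=regularizers.l2(0.0001)),  # no activation, as in the edit
    layers.Dense(5, kernel_regularizer=regularizers.l2(0.0001)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
```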
How to plot output from marching_cubes_lewiner in python?
I've been able to use the Lewiner marching cubes algorithm in Python. It outputs vertices, faces, and other attributes. I want to be sure that it is working correctly, so I'd like to plot a 3D image of what the function returns. However, I have not had any success so far. I have tried the following: Successful retrieval of necessary
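One common way to visualize the output (a sketch assuming scikit-image's measure.marching_cubes_lewiner, which is named measure.marching_cubes in newer releases, together with matplotlib; the ellipsoid is just an arbitrary test volume):

```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
from skimage import measure, draw

volume = draw.ellipsoid(6, 10, 16, levelset=True)   # arbitrary test volume
verts, faces, normals, values = measure.marching_cubes_lewiner(volume, level=0)

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
# Build one triangle per face from the vertex indices and add them as a mesh
mesh = Poly3DCollection(verts[faces], alpha=0.7)
ax.add_collection3d(mesh)
ax.set_xlim(0, volume.shape[0])
ax.set_ylim(0, volume.shape[1])
ax.set_zlim(0, volume.shape[2])
plt.show()
```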
Gensim LDA Coherence Score NaN
I created a Gensim LDA model as shown in this tutorial: https://www.machinelearningplus.com/nlp/topic-modeling-gensim-python/ and it generates 10 topics with a log perplexity of lda_model.log_perplexity(data_df['bow_corpus']) = -5.325966117835991. But when I run the coherence model on it to calculate the coherence score, like so: my LDA score is NaN. What am I doing wrong here? Answer Solved! The CoherenceModel requires the original text, instead of the
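For the c_v coherence measure, CoherenceModel needs the tokenized texts rather than the bag-of-words corpus; a minimal sketch of what the answer describes (the variable names tokenized_docs and id2word are assumptions) looks like:

```python
from gensim.models import CoherenceModel

# 'texts' must be the tokenized documents (a list of lists of tokens),
# not the bag-of-words corpus, when using the 'c_v' measure
coherence_model = CoherenceModel(
    model=lda_model,
    texts=tokenized_docs,      # e.g. [["apple", "fruit"], ["topic", "model"], ...]
    dictionary=id2word,
    coherence="c_v",
)
print(coherence_model.get_coherence())
```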
keras lstm error: expected to see 1 array
So I want to make an LSTM network to run on my data, but I get this message: ValueError: Error when checking input: expected lstm_1_input to have shape (None, 1) but got array with shape (1, 557). This is my code: Answer You need to change the input_shape value for the LSTM layer. Also, x_train must have the following shape. So,
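A minimal sketch of the kind of fix the answer describes (only the feature count of 557 comes from the error message; the data, layer sizes, and training settings are placeholders): the LSTM expects 3-D input of shape (samples, timesteps, features), so x_train is reshaped and input_shape is set to match.

```python
import numpy as np
from tensorflow.keras import layers, models

n_features = 557                                  # from the error message
x_train = np.random.rand(100, n_features)         # placeholder data
y_train = np.random.rand(100, 1)

# LSTM layers expect (samples, timesteps, features); use a single timestep here
x_train = x_train.reshape((x_train.shape[0], 1, n_features))

model = models.Sequential([
    layers.LSTM(32, input_shape=(1, n_features)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=2, batch_size=16)
```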