Input values return prediction with percentage [closed]

Hi, I have written a machine learning model using a decision tree. I created a web app where the user can enter values; the web app calls the model through a Flask API and then shows the result, but my result is only Yes/No. Is it possible for the result to also show a percentage of how likely the input is Yes/No, for example "Yes 76%"?

Answer: You could use predict_proba(), as it returns the class probabilities rather than just the predicted label.
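A minimal sketch of that idea, assuming an already-fitted sklearn DecisionTreeClassifier named `clf`; the helper name and feature values are illustrative only:

```python
# Minimal sketch: assumes a fitted sklearn DecisionTreeClassifier `clf`
# whose classes are e.g. ["No", "Yes"]; names and values are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def predict_with_percentage(clf: DecisionTreeClassifier, features):
    """Return the predicted label and its probability as a percentage."""
    features = np.asarray(features).reshape(1, -1)   # a single input row
    proba = clf.predict_proba(features)[0]           # e.g. [0.24, 0.76]
    best = int(np.argmax(proba))
    return {"label": str(clf.classes_[best]),
            "confidence": round(100 * float(proba[best]), 1)}

# Example result the Flask endpoint could return as JSON:
# predict_with_percentage(clf, [5.1, 3.5, 1.4]) -> {"label": "Yes", "confidence": 76.0}
```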

I keep getting ValueError: Shapes (10, 1) and (10, 3) are incompatible when training my model

Changing the number of inputs from 3 to 1 when I call makeModel allows the program to run without errors, but no training actually happens and the accuracy doesn't change.

Answer: LabelEncoder transforms the input into an array of integer-encoded values, i.e. if your input is ["paris", "paris", "tokyo", "amsterdam"] it can be encoded as [0, 0, 1, 2]. This is not the one-hot encoding scheme expected by the categorical_crossentropy loss. If you have an integer encoding, you have to use sparse_categorical_crossentropy. Fix: change the loss in your code to sparse_categorical_crossentropy. Sample:
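A minimal sketch of such a compile call; the toy data and model architecture below are placeholders, not the asker's original code:

```python
# Sketch: a 3-class Keras model trained on integer labels from LabelEncoder,
# so the loss must be sparse_categorical_crossentropy (labels are not one-hot).
import numpy as np
from tensorflow import keras
from sklearn.preprocessing import LabelEncoder

labels = ["paris", "paris", "tokyo", "amsterdam"] * 3   # toy labels
y = LabelEncoder().fit_transform(labels)                # -> integers 0..2
x = np.random.rand(len(y), 4)                           # toy features

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    keras.layers.Dense(3, activation="softmax"),        # 3 output classes
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",  # integer labels, not one-hot
    metrics=["accuracy"],
)
model.fit(x, y, epochs=2, verbose=0)
```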

Why is my target value not the same when I print it out and calculate it with the coefs and intercept?

I have worked on a polynomial regression model to predict my target values. The thing is that my predictions with the predict method make sense, but when I calculate the target variable via the coefficients and intercept I get a value far from what the predict method gives. If I calculate the y value for x = 2 via the coefficients and intercept, I obtain a value around 90. Coefficients: [0.00000000e+00, 4.66507179e+00, -7.69101941e-01, -5.47401755e-01, 2.92321976e-01, -5.57600284e-02, 5.44143396e-03, -2.91464609e-04, 8.16565621e-06, -9.36811416e-08]; intercept: [0.99640058].

Answer: In the polynomial transform, the value of the variable gets converted in such a way that x is expanded into its powers [1, x, x², …, x⁹]. The coefficients apply to those transformed features, so to reproduce predict() by hand you must apply the same transform to x before taking the dot product with the coefficients and adding the intercept.
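A short sketch illustrating the point with a toy degree-9 fit; the data here is made up, and only the transform-then-dot-product step is what matters:

```python
# Sketch: reproducing LinearRegression.predict() by hand for a degree-9
# polynomial fit. The data is synthetic; `poly` and `reg` stand in for the
# fitted PolynomialFeatures and LinearRegression objects from the question.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

x = np.linspace(0, 10, 50).reshape(-1, 1)
y = np.sin(x).ravel() + 0.1 * x.ravel()          # toy target

poly = PolynomialFeatures(degree=9)
X_poly = poly.fit_transform(x)                   # columns: 1, x, x^2, ..., x^9
reg = LinearRegression().fit(X_poly, y)

x_new = np.array([[2.0]])
x_new_poly = poly.transform(x_new)               # transform first, then dot
manual = (x_new_poly @ reg.coef_)[0] + reg.intercept_
auto = reg.predict(x_new_poly)[0]
assert np.isclose(manual, auto)                  # both give the same value
```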

Multiclassification task using keras [closed]

The problem is classification (not detection!) of several objects in one image. How can I do this using Keras? For example, if I have 6 classes (dogs, cats, birds, …) and two different objects (a cat and a bird) in the image, the label would be of the form [0, 1, 1, 0, 0, 0]. Which metric, loss function, and optimizer are recommended? I would like to use a CNN.
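No answer is quoted above; purely as a hedged illustration, the usual multi-label setup in Keras is a sigmoid output layer with binary_crossentropy, so several classes can be active at once. The layer sizes and input shape below are arbitrary:

```python
# Hedged sketch of a common multi-label CNN setup (not from the original post):
# sigmoid outputs give one independent probability per class.
from tensorflow import keras

num_classes = 6
model = keras.Sequential([
    keras.layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(num_classes, activation="sigmoid"),  # one probability per class
])
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",              # multi-label, not categorical/softmax
    metrics=[keras.metrics.BinaryAccuracy(), keras.metrics.AUC()],
)
# Labels are multi-hot vectors, e.g. [0, 1, 1, 0, 0, 0] for "cat" and "bird".
```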

validation accuracy not improving

No matter how many epochs I use or how I change the learning rate, my validation accuracy stays in the 50s. I'm using 1 dropout layer right now, and if I use 2 dropout layers, my maximum training accuracy is 40% with 59% validation accuracy. Currently, with 1 dropout layer, here are my results: the maximum it can reach is 59%. Here's the graph obtained. No matter how many changes I make, the validation accuracy reaches at most 59%. Here's my code: I'm very confused about why only my training accuracy is updating, not the validation accuracy. Here's the model summary:

Answer: The size of

Neural Network Results always the same

Edit: for anyone interested, I made it slightly better. I used an L2 regularizer of 0.0001, added two more dense layers with 3 and 5 nodes and no activation functions, added dropout=0.1 to the 2nd and 3rd GRU layers, reduced the batch size to 1000, and set the loss function to mae. Important note: I discovered that my TEST dataframe was extremely small compared to the train one, and that is the main reason it gave me very bad results. I have a GRU model which has 12 features as inputs and I'm trying to predict output power. I really do not understand though
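A hedged sketch of the setup described in the edit. The GRU widths, timestep count, and layer order are assumptions; only the settings named above (L2=0.0001, dropout=0.1 on the 2nd and 3rd GRU layers, extra Dense(3)/Dense(5) layers without activation, MAE loss, batch size 1000) come from the text:

```python
# Sketch only: stacked GRU regressor with 12 input features.
# Widths (64/32/16) and timesteps (24) are assumed, not from the post.
from tensorflow import keras
from tensorflow.keras import layers, regularizers

timesteps, n_features = 24, 12
reg = regularizers.l2(1e-4)                      # L2 regularizer = 0.0001

model = keras.Sequential([
    layers.GRU(64, return_sequences=True, kernel_regularizer=reg,
               input_shape=(timesteps, n_features)),
    layers.GRU(32, return_sequences=True, dropout=0.1, kernel_regularizer=reg),
    layers.GRU(16, dropout=0.1, kernel_regularizer=reg),
    layers.Dense(5),                             # no activation, as described
    layers.Dense(3),                             # no activation, as described
    layers.Dense(1),                             # predicted output power
])
model.compile(optimizer="adam", loss="mae")
# model.fit(x_train, y_train, batch_size=1000, epochs=...)  # data not shown in the post
```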

keras lstm error: expected to see 1 array

So I want to make an LSTM network to run on my data, but I get this message: ValueError: Error when checking input: expected lstm_1_input to have shape (None, 1) but got array with shape (1, 557). This is my code:

Answer: You need to change the input_shape value of the LSTM layer. Also, x_train must have the shape (samples, timesteps, features). So, change to:
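A sketch of what such a change could look like; the asker's original code isn't shown, so the data and layer sizes here are placeholders:

```python
# Sketch: reshape a (1, 557) array into (samples, timesteps, features) and
# give the LSTM a matching input_shape. Here each of the 557 values is
# treated as one timestep with a single feature.
import numpy as np
from tensorflow import keras

raw = np.random.rand(1, 557)                     # stand-in for the asker's data
x_train = raw.reshape(1, 557, 1)                 # (samples, timesteps, features)
y_train = np.array([1.0])

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(557, 1)), # timesteps=557, features=1
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=1, verbose=0)
```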

Decision tree with different split criterion than information gain

I'd like to create a decision tree in Python with a split criterion other than information gain, something like "1 - information gain" (roughly the opposite of an impurity measure, i.e. a similarity measure). Does something like this already exist? A paper reference would be appreciated. Thanks.

Answer: Yes, it exists. There are many research papers, for example: https://pdfs.semanticscholar.org/5e44/d49b2268421d7ddf09d68be9aa689359b772.pdf and https://www.springerprofessional.de/en/splitting-method-for-decision-tree-based-on-similarity-with-mixe/16031946

sklearn roc_auc_score with multi_class==“ovr” should have None average available

I'm trying to compute the AUC score for a multiclass problem using sklearn's roc_auc_score() function. I have a prediction matrix of shape [n_samples, n_classes] and a ground-truth vector of shape [n_samples], named np_pred and np_label respectively. What I'm trying to achieve is the set of AUC scores, one for each class that I have. To do so I would like to use the average parameter set to None and the multi_class parameter set to "ovr", but if I run it I get an error back. This error is expected from the sklearn function in the multiclass case; but if you take a look
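A hedged workaround sketch: compute the one-vs-rest AUC per class manually by binarizing the labels and scoring each column, since the average=None / multi_class="ovr" combination is rejected. The data below is synthetic; np_pred and np_label stand in for the asker's arrays:

```python
# Sketch: per-class one-vs-rest AUC computed column by column.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)
n_samples, n_classes = 100, 3
np_label = rng.integers(0, n_classes, size=n_samples)          # toy ground truth
np_pred = rng.random((n_samples, n_classes))
np_pred /= np_pred.sum(axis=1, keepdims=True)                  # toy probabilities

y_onehot = label_binarize(np_label, classes=np.arange(n_classes))  # (n_samples, n_classes)
per_class_auc = [
    roc_auc_score(y_onehot[:, k], np_pred[:, k])               # OvR AUC for class k
    for k in range(n_classes)
]
print(per_class_auc)                                           # one AUC per class
```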

Mask R-CNN for object detection and segmentation [Train for a custom dataset]

I'm doing research on "Mask R-CNN for Object Detection and Segmentation". I have read the original research paper, which presents Mask R-CNN for object detection, and I also found a few implementations of Mask R-CNN, here and here (by the Facebook AI Research team, called Detectron). But they all use COCO datasets for testing. I'm quite confused about how to train the above implementations on a custom dataset that has a large set of images, where each image has a subset of mask images marking the objects in that image. So I would be pleased if anyone