
Tag: deep-learning

How to train my own image dataset for text recognition and create the trained model for use in OCR [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 2 years ago. I created an image dataset of 62,992 images at 128x128 px resolution, containing characters, numbers, and symbols in four kinds…
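The question asks how to train a model on a labelled character-image dataset and save it for later use in an OCR pipeline. As a minimal sketch only (the directory layout, class count, and hyperparameters below are assumptions, not details from the question), a small Keras CNN classifier for 128x128 images could look like this:

```python
# Hedged sketch: a small Keras CNN for classifying 128x128 character images.
# "dataset/train" (one subfolder per class) and NUM_CLASSES are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 62  # assumption: adjust to the real number of character classes

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train",            # hypothetical path: one subfolder per class
    image_size=(128, 128),
    color_mode="grayscale",
    batch_size=32,
)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)

# Save the trained model so an OCR pipeline can load and reuse it later.
model.save("char_classifier.keras")
```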

ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=2. Full shape received: [None, 2584]

I’m working on a project that isolates vocal parts from audio. I’m using the DSD100 dataset, but for testing I’m using the DSD100subset dataset, from which I only use the mixtures and the vocals. I’m basing this work on this article. First, I process the audio files to extract a spectrogram and put it in a list, with all the…
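This error typically means a Conv2D-based model received flattened 2D data (batch, features) instead of the 4D input (batch, height, width, channels) it expects. A minimal sketch of the mismatch and one possible fix, assuming a hypothetical 8x323 spectrogram (8 × 323 = 2584):

```python
# Hedged sketch: reproduce the ndim mismatch and fix it by reshaping.
# The spectrogram shape (8, 323, 1) is an assumption for illustration.
import numpy as np
from tensorflow.keras import layers, models

# Conv2D layers expect 4D input: (batch, height, width, channels).
model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(8, 323, 1)),
    layers.Flatten(),
    layers.Dense(1),
])

flat = np.random.rand(5, 2584).astype("float32")  # 2D batch of flat vectors
# model.predict(flat)  # ValueError: expected min_ndim=4, found ndim=2

fixed = flat.reshape(-1, 8, 323, 1)               # restore the 4D shape first
print(model.predict(fixed).shape)                  # (5, 1)
```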

I keep getting ValueError: Shapes (10, 1) and (10, 3) are incompatible when training my model

Changing the number of inputs from 3 to 1 when I call makeModel allows the program to run without errors, but no training actually happens and the accuracy doesn’t change. Answer: LabelEncoder transforms the input into an array of encoded values, i.e. if your input is [“paris”, “paris”, “tokyo”, “amsterdam”], it can be encoded as [0, 0, 1, 2].
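The shape mismatch usually appears when integer labels from LabelEncoder are fed to a 3-unit softmax output compiled with categorical_crossentropy, which expects one-hot targets. A minimal sketch of both common fixes, using made-up feature data for illustration:

```python
# Hedged sketch: LabelEncoder gives integer labels; the data below is invented.
import numpy as np
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras import layers, models
from tensorflow.keras.utils import to_categorical

cities = ["paris", "paris", "tokyo", "amsterdam"] * 5   # 20 samples
y = LabelEncoder().fit_transform(cities)                # integers 0..2, shape (20,)
X = np.random.rand(len(y), 4).astype("float32")         # dummy features

model = models.Sequential([
    layers.Dense(16, activation="relu", input_shape=(4,)),
    layers.Dense(3, activation="softmax"),               # 3 classes -> 3 outputs
])

# Fix 1: keep integer labels and use sparse_categorical_crossentropy.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, verbose=0)

# Fix 2: one-hot encode labels to shape (20, 3) for categorical_crossentropy.
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, to_categorical(y, num_classes=3), epochs=2, verbose=0)
```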
