How can I select top-n elements from tensor without repeating elements?

I want to select the top-n elements of a 3-dimensional tensor, given that the picked elements are all unique. The elements are sorted by the 2nd column, and I'm selecting the top-2 in the example below, but I don't want duplicates in the result. Condition: no for loops or tf.map_fn(). Here is the input and the desired_output that I want: This is what I'm getting right now, which I don't want! Here is what I actually want. Answer This is one possible way to do that, although it requires more work since it sorts the array first. Note there is a kind of corner
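The answer above is cut off, but as an illustrative alternative, here is a minimal 2-D sketch (the question uses a 3-D tensor) that assumes the rows are already sorted descending by the ranking column and that "unique" means unique values in the first column. It deduplicates without loops or tf.map_fn() using tf.unique and a segment reduction:

```python
import tensorflow as tf

def top_n_unique(x, n):
    # x: 2-D tensor already sorted (descending) by its ranking column.
    # Keep only the first (best-ranked) row for each distinct key in column 0.
    keys = x[:, 0]
    uniq, idx = tf.unique(keys)  # idx maps each row to its unique-key group
    # Index of the earliest row in each group (earliest = highest ranked).
    first = tf.math.unsorted_segment_min(
        tf.range(tf.shape(x)[0]), idx, tf.size(uniq))
    first = tf.sort(first)       # restore the original ranking order
    return tf.gather(x, first[:n])

x = tf.constant([[1., 9.],
                 [2., 8.],
                 [1., 7.],   # duplicate key 1 -> dropped
                 [3., 6.]])
print(top_n_unique(x, 2))    # rows [1., 9.] and [2., 8.]
```

The same idea extends to a batched 3-D input by applying it per batch slice, though the indexing becomes more involved.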

I keep getting ValueError: Shapes (10, 1) and (10, 3) are incompatible when training my model

Turning the number of inputs when I call makeModel from 3 to 1 allows the program to run without errors, but no training actually happens and the accuracy doesn’t change. Answer LabelEncoder transforms the input to an array of encoded values, i.e. if your input is [“paris”, “paris”, “tokyo”, “amsterdam”] then it can be encoded as [0, 0, 1, 2]. This is not the one-hot encoding scheme expected by the categorical_crossentropy loss. If you have an integer encoding, you will have to use sparse_categorical_crossentropy. Fix: change your loss to sparse_categorical_crossentropy: Sample
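The sample is truncated, so here is a hedged sketch of the suggested fix; the layer sizes and feature data are made up, and note that LabelEncoder actually assigns codes alphabetically, so the example strings encode as [1, 1, 2, 0] rather than the order shown above:

```python
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import LabelEncoder

cities = ["paris", "paris", "tokyo", "amsterdam"]
y = LabelEncoder().fit_transform(cities)   # alphabetical codes: [1, 1, 2, 0]
X = np.random.rand(4, 3).astype("float32") # dummy features, shape (4, 3)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # one unit per class
])

# Integer labels -> sparse_categorical_crossentropy (no one-hot needed).
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=1, verbose=0)
preds = model.predict(X, verbose=0)        # shape (4, 3), one row per sample
```

With one-hot labels you would keep categorical_crossentropy instead; the shape mismatch in the error comes from pairing integer labels with the one-hot loss.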

validation accuracy not improving

No matter how many epochs I use or how I change the learning rate, my validation accuracy stays in the 50s. I’m using 1 dropout layer right now, and if I use 2 dropout layers, my max training accuracy is 40% with 59% validation accuracy. Currently, with 1 dropout layer, here are my results: Again, the max it can reach is 59%. Here’s the graph obtained: No matter how many changes I make, the validation accuracy maxes out at 59%. Here’s my code: I’m very confused why only my training accuracy is updating, not the validation accuracy. Here’s the model summary: Answer The size of

CuDNN crash in TF 2.x after many epochs of training

I’m currently becoming more and more desperate about my tensorflow project. It took many hours to install tensorflow before I figured out that PyCharm, Python 3.7 and TF 2.x are somehow not compatible. Now it is running, but I get a really unspecific CuDNN error after many epochs of training. Do you know if my code is wrong, or if there is e.g. an installation error? Could you please point me in a direction? I also didn’t find anything specific by searching. My setup [in brackets what I also tried]: HW: i7-4790K, 32 GB RAM and GeForce 2070 Super 8GB OS: Windows
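The excerpt is cut off before the answer, but a common mitigation for unspecific CuDNN failures on 8 GB cards (a hedged suggestion, not necessarily the accepted fix here) is to enable GPU memory growth so TensorFlow allocates memory on demand instead of grabbing the whole pool up front:

```python
import tensorflow as tf

# Allow each GPU's memory pool to grow on demand; this must run before any
# op touches the GPU, and it often avoids CUDNN_STATUS_INTERNAL_ERROR-style
# crashes caused by the allocator exhausting the card.
gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```

On a machine without a GPU the loop simply does nothing, so the snippet is safe to keep in shared code.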

In Keras, can I use an arbitrary algorithm as a loss function for a network?

I have been trying to understand this machine learning problem for many days now and it really confuses me; I need some help. I am trying to train a neural network whose input is an image and which generates another image as output (it is not a very large image, it is 8×8 pixels). And I have an arbitrary fancy_algorithm() “black box” function that receives the input and the prediction of the network (the two images) and outputs a float number that tells how good the output of the network was (it calculates a loss). My problem is that I want to
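The question is truncated, but one way to plug arbitrary Python into a Keras loss is tf.py_function. The caveat is central to this question: gradients only flow if the wrapped computation is built from differentiable TF ops, so a genuinely opaque black box would still block training. A sketch, using a stand-in mean absolute error as the hypothetical fancy_algorithm:

```python
import tensorflow as tf

def fancy_algorithm(y_true, y_pred):
    # Hypothetical stand-in for the black-box scorer. Because this version
    # uses only TF ops, gradients can flow through it.
    return tf.reduce_mean(tf.abs(y_true - y_pred))

def black_box_loss(y_true, y_pred):
    # tf.py_function lets arbitrary Python run inside the graph; if the
    # inner function used NumPy or external code, the loss value would
    # still be computed but backpropagation through it would fail.
    return tf.py_function(fancy_algorithm, [y_true, y_pred], tf.float32)
```

If the real scorer is not differentiable, the usual workarounds are to re-express it with TF ops or to train with a gradient-free method (e.g. evolutionary search) instead of backpropagation.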

Missing module tensorflow on iPython azure machine learning (Classic)

Yesterday I installed the tensorflow module from an iPython notebook in Azure Machine Learning Studio (classic). The import worked well after installing the module using (!pip install tensorflow). But today, when I tried to import this module, I got this “missing module” error, and when I reinstalled the module it worked well again. Am I missing anything here? Do I need to install the module each and every day before using it? Can someone please explain? Answer For Azure Machine Learning (Classic) Studio notebooks, you need to install Tensorflow. Furthermore, the notebook server session times out after a period of inactivity, hence, you

Are there IEC 61131 / IEC 61499 PLC function blocks that use OPC UA to transport data?

I have a machine learning and advanced control application in Python (TensorFlow + Gekko) that I need to integrate with a Programmable Logic Controller (PLC) that provides the data acquisition and final element control. Can I use a rack-mounted Linux (preferred) or Windows Server as the computational engine, with data transport through OPC-UA (Open Platform Communications – Unified Architecture)? There is a Python OPC-UA / IEC 62541 Client (and Server) and a Python MODBUS package that I’ve used on other projects when connecting to Distributed Control Systems (DCS) such as Emerson DeltaV, Honeywell Experion/TDC3000, and Yokogawa DCS. Can I

Why is TensorFlow 2 much slower than TensorFlow 1?

It’s been cited by many users as the reason for switching to Pytorch, but I’ve yet to find a justification / explanation for sacrificing the most important practical quality, speed, for eager execution. Below is code benchmarking performance, TF1 vs. TF2 – with TF1 running anywhere from 47% to 276% faster. My question is: what is it, at the graph or hardware level, that yields such a significant slowdown? I’m looking for a detailed answer – I am already familiar with the broad concepts. Relevant Git Specs: CUDA 10.0.130, cuDNN 7.4.2, Python 3.7.4, Windows 10, GTX 1070 Benchmark results: UPDATE: Disabling Eager Execution
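The update above mentions disabling eager execution; in TF 2.x that toggle lives in the compat.v1 API. A minimal sketch of the switch the benchmark refers to (it must run before any model is built):

```python
import tensorflow as tf

# TF2 runs eagerly by default; reverting to TF1-style deferred graph
# execution recovers much of the gap reported in benchmarks like the one
# in the question. Call this before constructing any ops or models.
tf.compat.v1.disable_eager_execution()

print(tf.executing_eagerly())  # False
```

The modern alternative is to keep eager mode and wrap hot code paths in tf.function, which compiles them to graphs without a global mode switch.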

How to load models trained on GPU into CPU (system) memory?

I trained a model on a GPU and now I am trying to evaluate it on the CPU (the GPU is being used for a different training run). However, when I try to load it using: I am getting a CUDA_ERROR_OUT_OF_MEMORY: (I also tried setting compile=True, with the same result.) It seems that the model is being loaded onto the GPU, which is already used by another instance. How can I force keras/tensorflow to load it into system memory and execute it on the CPU? Answer Can you define everything inside with tf.device('/cpu:0'):, except the library import part, and test? If that doesn’t work
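Besides the tf.device suggestion, another common approach (a sketch, not necessarily the accepted answer here) is to hide the GPU from TensorFlow entirely via CUDA_VISIBLE_DEVICES, which must be set before the tensorflow import because it is read once at import time. The tiny round-trip model below is a placeholder standing in for the trained model:

```python
import os
# Hide all GPUs *before* TensorFlow is imported, so the CUDA runtime never
# tries to allocate memory on the busy card.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tempfile
import tensorflow as tf

# Placeholder model: save and reload it to show the load happens on CPU.
model = tf.keras.Sequential([tf.keras.layers.Input(shape=(2,)),
                             tf.keras.layers.Dense(1)])
path = os.path.join(tempfile.mkdtemp(), "model.h5")
model.save(path)

restored = tf.keras.models.load_model(path, compile=False)
```

Because the environment variable hides the device process-wide, everything in this Python process (loading and inference) stays in system memory.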

Concatenate two models with tensorflow.keras

I’m currently studying neural network models for image analysis with the MNIST dataset. I first used only the image to build a first model. Then I created an additional variable, which is: 0 when the digit is actually between 0 and 4, and 1 when it’s greater than or equal to 5. Therefore, I want to build a model that can take these two pieces of information: the image of the digit, and that additional variable I just created. I created the first two models, one for the image and one for the exogenous variable, as follows: Then I would
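The question is cut off before the merge step, but with the Keras functional API two branches like these are typically joined with layers.concatenate. A sketch with assumed layer sizes (the question's actual sub-models are not shown):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Branch 1: the 28x28 MNIST image.
img_in = layers.Input(shape=(28, 28, 1))
x = layers.Flatten()(img_in)
x = layers.Dense(64, activation="relu")(x)

# Branch 2: the exogenous binary indicator (1 when the digit is >= 5).
flag_in = layers.Input(shape=(1,))
y = layers.Dense(4, activation="relu")(flag_in)

# Merge both branches along the feature axis, then classify the 10 digits.
merged = layers.concatenate([x, y])
out = layers.Dense(10, activation="softmax")(merged)

model = Model(inputs=[img_in, flag_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Training then takes both inputs as a list: model.fit([images, flags], labels, ...).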