I’m trying to visualize the embeddings in TensorBoard but the projector tab isn’t showing anything on Colab. When I downloaded the logs folder to my PC and then ran it locally, it worked perfectly fine. Does anybody have any idea why it isn’t working in Google Colab? The command I’m using to show the TensorBoard: %tensorboard --logdir tmp/ Output:
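For reference, the usual pair of notebook magics for inline TensorBoard in Colab is shown below; note that `--logdir` takes two dashes (some editors render them as an en dash, which TensorBoard will not parse):

```
%load_ext tensorboard
%tensorboard --logdir tmp/
```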
Tag: tensorflow
nothing provides __cuda needed by tensorflow-2.10.0-cuda112py310he87a039_0
I’m using Mambaforge on WSL2 Ubuntu 22.04 with systemd enabled. I’m trying to install TensorFlow 2.10 with CUDA enabled, using the command: And the command nvidia-smi -q from WSL2 gives: And my other environment works as expected: Then it tries to install package version cuda112py39h9333c2f_1, which uses Python 3.9, but I want Python 3.10. Whenever I try to install
AttributeError: module 'keras.preprocessing.image' has no attribute 'img_to_array'
I have added the following libraries, and the first half of the code executes. In the second half I get this error. Libraries added: Error: AttributeError: module 'keras.preprocessing.image' has no attribute 'img_to_array' I was following this code and changed the libraries too, but still can’t resolve the issue. https://www.analyticsvidhya.com/blog/2021/06/k-means-clustering-and-transfer-learning-for-image-classification/ Answer It has now moved to tf.keras.utils.img_to_array. See the docs
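A minimal check of the relocated helper (the 4×4 zero image is just a stand-in for a real loaded image):

```python
import numpy as np
import tensorflow as tf

# img_to_array moved from keras.preprocessing.image to tf.keras.utils;
# it accepts a PIL image or an array-like and returns a float32 array.
img = np.zeros((4, 4, 3), dtype=np.uint8)  # stand-in for a loaded image
arr = tf.keras.utils.img_to_array(img)
print(arr.shape, arr.dtype)  # (4, 4, 3) float32
```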
List comprehension in Keras custom loss function
I want to make my own custom loss function. The model’s output shape is (None, 7, 3), so I want to split the output into 3 lists. But I got an error as follows: I think upper_b_true = [m[0] for m in y_true] is not supported. I don’t know how to address this problem. I tried to execute it while partially
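A sketch of how the split could be done with tensor ops instead of a Python list comprehension; the MSE combination below is made up, since the question’s actual loss is not shown:

```python
import tensorflow as tf

# Hypothetical loss illustrating how to split a (None, 7, 3) output with
# tensor ops: a Python list comprehension cannot iterate over a symbolic
# batch dimension, but tf.unstack along the last axis can split it.
def custom_loss(y_true, y_pred):
    # Unstack along the last axis -> three tensors of shape (None, 7).
    upper_t, mid_t, lower_t = tf.unstack(y_true, axis=-1)
    upper_p, mid_p, lower_p = tf.unstack(y_pred, axis=-1)
    return (tf.reduce_mean(tf.square(upper_t - upper_p))
            + tf.reduce_mean(tf.square(mid_t - mid_p))
            + tf.reduce_mean(tf.square(lower_t - lower_p)))

y = tf.ones((2, 7, 3))
print(float(custom_loss(y, y)))  # identical tensors -> 0.0
```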
Output tensors of a Functional model must be the output of a TensorFlow `Layer`
So I’m trying to expand the RoBERTa pretrained model and I was doing a basic model for testing, but I’m getting this error from TensorFlow: ValueError: Output tensors of a Functional model must be the output of a TensorFlow Layer. It comes from the Keras Model API, but I don’t know exactly what’s causing it. Code: Full error traceback:
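Without the full code it is hard to say which op triggers this, but a usual cause is applying raw TF ops to a symbolic Keras input; wrapping the op in a Lambda layer is one common fix. A minimal sketch, where the reduce-mean op is only a placeholder for whatever raw op the real model uses:

```python
import tensorflow as tf

# Raw TF ops applied to a symbolic Keras input can yield output tensors
# that Keras does not recognize as layer outputs; wrapping the op in a
# Lambda layer makes the output a proper Keras Layer output.
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Lambda(
    lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

print(model(tf.ones((2, 4))).numpy())  # each row of ones averages to 1.0
```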
Save Keras preprocessing layer
I have a model where I do various preprocessing outside the model itself. One part of the preprocessing uses a category encoder based on Keras, created with: I then apply this to my pandas DataFrame. Now I want to store my model, and in order to store the model I also have to store the 2 preprocessing layers cat_index
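A minimal sketch of one way to make a standalone preprocessing layer storable: wrap it in a small Model and save that alongside the main model. The StringLookup layer and its vocabulary here are hypothetical, since the question’s encoder definition is not shown:

```python
import tensorflow as tf

# Hypothetical category encoder; the question's own layer is not shown.
lookup = tf.keras.layers.StringLookup(vocabulary=["red", "green", "blue"])

# Wrapping the layer in a Model lets Keras serialize and reload it.
inp = tf.keras.Input(shape=(1,), dtype=tf.string)
pre_model = tf.keras.Model(inp, lookup(inp))

pre_model.save("cat_encoder.keras")
restored = tf.keras.models.load_model("cat_encoder.keras")
# StringLookup reserves index 0 for OOV by default, so "green" maps to 2.
print(restored(tf.constant([["green"]])).numpy())
```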
How to make a custom gradient for a multiple output function?
I would like to know how to write a custom gradient for a function which has multiple outputs (or an array output). As a simple example, I wrote the following code for y = tan(x @ w + b), where x has shape (2,3) and y has shape (2,2). To compare results, I calculated the operation in the usual way and by the
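A sketch of how such a custom gradient could look with tf.custom_gradient; the gradient function receives dy with the full (2, 2) output shape and must return one gradient per input. The concrete values are made up for the comparison against TensorFlow’s own autodiff:

```python
import tensorflow as tf

# Custom gradient for a function with a non-scalar output: y = tan(x @ w + b).
@tf.custom_gradient
def tan_affine(x, w, b):
    z = tf.matmul(x, w) + b

    def grad(dy):
        # d tan(z)/dz = 1 / cos(z)^2; dy arrives with the shape of y.
        dz = dy / tf.square(tf.cos(z))
        dx = tf.matmul(dz, w, transpose_b=True)   # back through x @ w
        dw = tf.matmul(x, dz, transpose_a=True)
        db = tf.reduce_sum(dz, axis=0)            # b was broadcast over rows
        return dx, dw, db

    return tf.tan(z), grad

x = tf.constant([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])  # shape (2, 3)
w = 0.1 * tf.ones((3, 2))                            # shape (3, 2)
b = tf.zeros(2)

with tf.GradientTape() as tape:
    tape.watch(w)
    loss = tf.reduce_sum(tan_affine(x, w, b))        # y has shape (2, 2)
grad_w = tape.gradient(loss, w)

# Compare against TensorFlow's built-in autodiff of the same expression.
with tf.GradientTape() as ref_tape:
    ref_tape.watch(w)
    ref_loss = tf.reduce_sum(tf.tan(tf.matmul(x, w) + b))
diff = tf.reduce_max(tf.abs(grad_w - ref_tape.gradient(ref_loss, w)))
print(float(diff))  # ~0: the custom gradient matches autodiff
```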
Is passing activity_regularizer as an argument to Conv2D() the same as adding it separately right after Conv2D()? (TensorFlow)
I was wondering whether creating the model by passing activity_regularizer='l1_l2' as an argument to Conv2D() makes any mathematical difference compared to adding model.add(ActivityRegularization(l1=…, l2=…)) separately? It is hard for me to tell, as training always involves some randomness, but the results seem similar. One additional question I have: I accidentally passed the activity_regularizer='l1_l2' argument to
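Randomness can be taken out of the comparison by giving both variants identical deterministic weights and comparing the regularization losses directly. A sketch, assuming the string 'l1_l2' uses Keras’s default factors l1=0.01, l2=0.01:

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(4, 8, 8, 1).astype("float32")

# Variant A: regularizer passed directly to the layer.
conv_a = tf.keras.layers.Conv2D(2, 3, activity_regularizer="l1_l2",
                                kernel_initializer="ones", use_bias=False)

# Variant B: a separate ActivityRegularization layer with the same factors
# ("l1_l2" defaults to l1=0.01, l2=0.01).
conv_b = tf.keras.layers.Conv2D(2, 3, kernel_initializer="ones",
                                use_bias=False)
reg_b = tf.keras.layers.ActivityRegularization(l1=0.01, l2=0.01)

inp = tf.keras.Input(shape=(8, 8, 1))
model_a = tf.keras.Model(inp, conv_a(inp))
model_b = tf.keras.Model(inp, reg_b(conv_b(inp)))

# Calling the models populates model.losses with the activity penalties.
model_a(x)
model_b(x)
loss_a = float(tf.add_n(model_a.losses))
loss_b = float(tf.add_n(model_b.losses))
print(loss_a, loss_b)  # identical penalties for identical weights
```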
Removing a range of values/indices from a TensorFlow tensor
Consider the following tensor: The output of the above tensor is: Now I want to remove, let’s say, the first value, i.e., 1.3, remove the values at indices 4 to 6, and everything from the value 0.25 onwards ([12:]). The output should be: Can it be done? Thanks in advance. Answer Sure, have a look at tensor slicing. In your
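A sketch of the slicing approach on a made-up tensor (the question’s actual values are not shown): keep the slices you want and concatenate them back together.

```python
import tensorflow as tf

# Hypothetical 1-D tensor standing in for the question's data.
t = tf.constant([1.3, 0.1, 0.2, 0.3, 9.9, 9.8, 9.7,
                 0.4, 0.5, 0.6, 0.7, 0.8, 0.25, 0.26])

# Drop index 0, indices 4..6, and everything from index 12 on,
# by concatenating the slices to keep.
kept = tf.concat([t[1:4], t[7:12]], axis=0)
print(kept.numpy())  # [0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8]
```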
Obtaining the parameters of layers after concatenation in Keras
I’m trying to get the output and input parameters after the concatenation in Keras, more specifically in the “concat_” and “hidden 6” layers. Is there a way to obtain the parameters by layer name? Also, is there any way to run the model (after training) only up to the concatenation point? Answer You could give each layer that you want to retrieve later a specific
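A sketch of both ideas on a toy two-branch model; the layer names are hypothetical stand-ins for the question’s “concat_” and “hidden 6” layers:

```python
import tensorflow as tf

# Toy two-branch model with explicitly named layers.
in_a = tf.keras.Input(shape=(4,), name="in_a")
in_b = tf.keras.Input(shape=(4,), name="in_b")
concat = tf.keras.layers.Concatenate(name="concat_")([in_a, in_b])
hidden = tf.keras.layers.Dense(3, name="hidden6")(concat)
model = tf.keras.Model([in_a, in_b], hidden)

# Retrieve a layer (and its input/output tensors) by name.
layer = model.get_layer("concat_")
print(layer.output.shape)  # (None, 8)

# Build a submodel that runs only up to the concatenation point.
sub = tf.keras.Model(model.input, layer.output)
print(sub([tf.ones((1, 4)), tf.ones((1, 4))]).shape)  # (1, 8)
```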