I’m loading images via a tf.data pipeline. I want to use the obtained data in non-TensorFlow routines too. Therefore, I want to extract the data, e.g. to NumPy arrays. How can I achieve this? I can’t use tfds. Answer I would suggest unbatching your dataset and using tf.data.Dataset.map. Or, as suggested in the comments, you could also try just working with the batches.
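A minimal sketch of the suggested approach, assuming a batched dataset of (image, label) pairs; the dataset contents and shapes here are placeholders:

```python
import numpy as np
import tensorflow as tf

# Placeholder for the batched (image, label) dataset from the question.
ds = tf.data.Dataset.from_tensor_slices(
    (np.random.rand(8, 4, 4, 3).astype("float32"), np.arange(8))
).batch(2)

# Undo the batching, then pull every element out as a NumPy array.
images, labels = [], []
for image, label in ds.unbatch().as_numpy_iterator():
    images.append(image)
    labels.append(label)

images = np.stack(images)  # shape (8, 4, 4, 3)
labels = np.stack(labels)  # shape (8,)
```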
Tag: tensorflow-datasets
Tensorflow Datasets: Crop/Resize images per batch after dataset.batch()
Is it possible to crop/resize images per batch? I’m using the TensorFlow Dataset API as below: I want all the images within a batch to have the same size; across batches, however, the sizes can differ. For example, the 1st batch has all images of shape (batch_size, 300, 300, 3). The next batch can have images of shape (batch_size,
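One way to get uniform shapes within a batch but varying shapes across batches is to resize after .batch(), since map() then sees whole batches at once. A sketch, with the 300x300 inputs and the candidate target sizes as assumptions:

```python
import tensorflow as tf

# Stand-in for the question's pipeline: single 300x300 RGB images.
ds = tf.data.Dataset.from_tensor_slices(tf.random.uniform((16, 300, 300, 3)))

def resize_batch(images):
    # One target size per batch: all images in the batch share a shape,
    # but different batches may come out with different shapes.
    sizes = tf.constant([160, 224, 300])
    idx = tf.random.uniform((), maxval=3, dtype=tf.int32)
    side = sizes[idx]
    return tf.image.resize(images, tf.stack([side, side]))

ds = ds.batch(4).map(resize_batch)

for batch in ds:
    print(batch.shape)  # e.g. (4, 160, 160, 3), (4, 300, 300, 3), ...
```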
tf.data: create a Dataset from a list of Numpy arrays of different shape
I have a list of NumPy arrays of different shape. I need to create a Dataset such that each time an element is requested, I get a tensor with the shape and values of the given NumPy array. How can I achieve this? This is NOT working: since you get, as expected: “Can’t convert non-rectangular Python sequence to Tensor.” p.s.
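tf.data.Dataset.from_generator sidesteps the non-rectangular-sequence error, because each array is emitted one at a time under a partially unknown shape. A sketch, assuming rank-2 arrays that differ only in their first dimension:

```python
import numpy as np
import tensorflow as tf

arrays = [np.random.rand(3, 5), np.random.rand(7, 5), np.random.rand(1, 5)]

# from_tensor_slices raises "Can't convert non-rectangular Python
# sequence to Tensor" on this list; a generator with a partially
# unknown output shape handles the ragged case.
ds = tf.data.Dataset.from_generator(
    lambda: iter(arrays),
    output_signature=tf.TensorSpec(shape=(None, 5), dtype=tf.float64),
)

for t in ds:
    print(t.shape)  # (3, 5), (7, 5), (1, 5)
```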
Reading a TensorFlow dataset changes the behaviour of `take()` and `skip()`
I am trying to inspect the labels inside my TensorFlow dataset. However, the values of the labels change to something unexpected after using take() and skip(), depending on whether I inspect the data or not. (It looks like some ones within the labels changed to zeros.) I do not see any way that my inspection function could change the dataset.
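The pipeline isn’t shown, but one common cause of this symptom (an assumption here) is a shuffle() upstream of take()/skip(): with the default reshuffle_each_iteration=True, every inspection pass reorders the data, so the two subsets are drawn from different orderings. A sketch of the difference:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(10)

# Default: the order is redrawn on every pass, so what take()/skip()
# return changes between inspections of the dataset.
unstable = ds.shuffle(10, seed=0)  # reshuffle_each_iteration=True

# Pinning the shuffle keeps take()/skip() consistent across passes.
stable = ds.shuffle(10, seed=0, reshuffle_each_iteration=False)
train, val = stable.take(8), stable.skip(8)
```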
Training a single model jointly over multiple datasets in tensorflow
I want to train a single variational autoencoder model, or even a standard autoencoder, over many datasets jointly (e.g. MNIST, CIFAR, SVHN, etc., where all the images in the datasets are resized to the same input shape). Here is the VAE tutorial in TensorFlow which I am using as a starting point: https://www.tensorflow.org/tutorials/generative/cvae. For training the model, I would
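A sketch of one way to combine such datasets, assuming images are rescaled to a shared 32x32 RGB input; the stand-in datasets and the use of tf.data.Dataset.sample_from_datasets (tf.data.experimental.sample_from_datasets on older TF versions) are assumptions, since the question’s pipeline isn’t shown:

```python
import tensorflow as tf

IMG_SIDE = 32  # shared input size, an assumption

def to_common_shape(ds):
    def _prep(img):
        img = tf.cast(img, tf.float32) / 255.0
        img = tf.image.resize(img, (IMG_SIDE, IMG_SIDE))
        if img.shape[-1] == 1:               # unify channel counts too
            img = tf.image.grayscale_to_rgb(img)
        return img
    return ds.map(_prep)

# Stand-ins for mnist/cifar/svhn-style datasets with different shapes.
ds_a = to_common_shape(tf.data.Dataset.from_tensor_slices(
    tf.random.uniform((100, 28, 28, 1), maxval=255)))
ds_b = to_common_shape(tf.data.Dataset.from_tensor_slices(
    tf.random.uniform((100, 32, 32, 3), maxval=255)))

# Interleave so every batch mixes examples from all datasets.
joint = tf.data.Dataset.sample_from_datasets([ds_a, ds_b]).batch(32)
```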
InvalidArgumentError: StringToNumberOp could not correctly convert string
I am trying to extract the labels from a file path of the form: The labels are 26, 0 and 3 in the file name. First I create a list dataset: then I define a function that reads the image and gets the labels, and use .map() on list_ds. When I print one of the labels as a sanity
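That error usually means tf.strings.to_number received a piece that isn’t purely numeric, e.g. a label still carrying the file extension. A sketch of one parsing approach, with the filename pattern (“..._26_0_3.jpg”) as an assumption:

```python
import tensorflow as tf

def get_labels(file_path):
    # Assumed pattern: ".../img_26_0_3.jpg"; adjust to the real names.
    fname = tf.strings.split(file_path, "/")[-1]
    fname = tf.strings.regex_replace(fname, r"\.jpg$", "")  # drop extension
    parts = tf.strings.split(fname, "_")
    # Without stripping ".jpg", to_number on the last piece raises
    # "StringToNumberOp could not correctly convert string".
    return tf.strings.to_number(parts[-3:], out_type=tf.int32)

print(get_labels(tf.constant("data/img_26_0_3.jpg")))  # [26  0  3]
```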
Loading a large dataset from CSV files in TensorFlow
I use the following code to load a bunch of images in my data set in TensorFlow, which works well: I am wondering how I can use similar code to load a bunch of CSV files. Each CSV file has shape 256 x 256 and can be treated as a grayscale image. I don’t know what I should
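One way is to keep the same list-files-then-map structure and parse each CSV inside tf.py_function with np.loadtxt; the glob pattern is a placeholder, and the code assumes one comma-separated 256 x 256 grid per file:

```python
import numpy as np
import tensorflow as tf

files = tf.data.Dataset.list_files("data/*.csv")  # hypothetical path

def load_csv(path):
    # np.loadtxt parses one 256x256 comma-separated grid per file.
    def _read(p):
        return np.loadtxt(p.numpy().decode(), delimiter=",").astype("float32")
    img = tf.py_function(_read, [path], tf.float32)
    img.set_shape((256, 256))       # restore the shape py_function loses
    return tf.expand_dims(img, -1)  # add a channels axis: (256, 256, 1)

ds = files.map(load_csv).batch(8)
```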
Tensorflow 2.3, Tensorflow dataset, TypeError: &lt;lambda&gt;() takes 1 positional argument but 4 were given
I use tf.data.TextLineDataset to read 4 large files and I use tf.data.Dataset.zip to zip these 4 files and create “dataset”. However, I cannot pass “dataset” to dataset.map to use tf.compat.v1.string_split and split with the “\t” separator, and finally use batch, prefetch and feed into my model. This is my code: This is the error message: What should I do? Answer
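Zipping four datasets yields 4-tuples, so the function passed to map must accept four arguments; a one-argument lambda produces exactly the TypeError in the title. A sketch with in-memory stand-ins for the four TextLineDatasets:

```python
import tensorflow as tf

# Stand-ins for the four TextLineDatasets (the real ones read files).
ds1 = tf.data.Dataset.from_tensor_slices(["a\tb", "c\td"])
ds2 = tf.data.Dataset.from_tensor_slices(["1\t2", "3\t4"])
ds3 = tf.data.Dataset.from_tensor_slices(["x\ty", "z\tw"])
ds4 = tf.data.Dataset.from_tensor_slices(["5\t6", "7\t8"])

zipped = tf.data.Dataset.zip((ds1, ds2, ds3, ds4))

# Each element is a 4-tuple, so the mapped function needs 4 parameters;
# `lambda line: ...` raises "takes 1 positional argument but 4 were given".
def split_all(a, b, c, d):
    return (tf.strings.split(a, "\t"), tf.strings.split(b, "\t"),
            tf.strings.split(c, "\t"), tf.strings.split(d, "\t"))

ds = zipped.map(split_all).padded_batch(2).prefetch(tf.data.AUTOTUNE)
```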
Split .tfrecords file into many .tfrecords files
Is there any way to split a .tfrecords file into many .tfrecords files directly, without writing back each Dataset example? Answer You can use a function like this: For example, to split the file my_records.tfrecord into parts of 100 records each, you would do: This would create multiple smaller record files my_records.tfrecord.000, my_records.tfrecord.001, etc.
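A sketch of the kind of function the answer refers to: read the serialized records without parsing them, group them with batch(), and write each group to its own file (the .000-style suffixes follow the answer’s naming):

```python
import tensorflow as tf

def split_tfrecord(tfrecord_path, split_size):
    # Read the raw serialized records; no parsing is needed to re-shard.
    raw = tf.data.TFRecordDataset(tfrecord_path)
    for i, batch in enumerate(raw.batch(split_size)):
        part_path = "{}.{:03d}".format(tfrecord_path, i)
        with tf.io.TFRecordWriter(part_path) as writer:
            for record in batch.numpy():
                writer.write(record)

split_tfrecord("my_records.tfrecord", 100)
```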
How to use tf.data.Dataset.apply() for reshaping the dataset
I am working with time series models in TensorFlow. My dataset contains physics signals. I need to divide these signals into windows and give the sliced windows as input to my model. Here is how I am reading the data and slicing it: I want to reshape this dataset to # {‘mix’: TensorShape([Dimension(32)]), ‘pure’: TensorShape([Dimension(32)])}. The equivalent transformation in NumPy would
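One pattern that fits apply() here, assuming a window length of 32 taken from the TensorShape([Dimension(32)]) in the question: window() turns the signals into nested datasets, and flat_map() plus an inner batch() collapses each window back into one dense element per key:

```python
import tensorflow as tf

WIN = 32  # window length, taken from the question's TensorShape

# Stand-in for the signals dataset: one scalar per key per time step.
ds = tf.data.Dataset.from_tensor_slices(
    {"mix": tf.random.uniform((320,)), "pure": tf.random.uniform((320,))})

def to_windows(dataset):
    # window() yields a dict of nested datasets per window; zipping the
    # inner datasets after batch(WIN) gives one dense (32,) tensor per
    # key, i.e. {'mix': (32,), 'pure': (32,)} -- the reshaping asked about.
    return dataset.window(WIN, shift=WIN, drop_remainder=True).flat_map(
        lambda w: tf.data.Dataset.zip(
            {k: v.batch(WIN, drop_remainder=True) for k, v in w.items()}))

windows = ds.apply(to_windows)
print(windows.element_spec)
```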