On the client side, I am sending an audio blob (a WAV file). On the server side, I am trying to convert that blob back into a WAV audio file. I did the following:
blob = request.FILES['file']
name = "TEST.wav"
audio = wave.open(name, 'wb')
audio.setnchannels(1)
audio.writeframes(blob.read())
I thought that converting the blob would be similar to converting a blob image to a JPEG file, but that assumption was very wrong. It didn't work; I get an error: "Error: sample width not specified." I then used setsampwidth() and passed in an arbitrary number between 1 and 4 (after looking at the wave.py source file... I don't know why the width has to be between 1 and 4 bytes). After that, another error is thrown: "Error: sampling rate not specified." How do I specify the sampling rate?
What do the setnchannels() and setsampwidth() methods do? Is there an "easy" way to generate the WAV file from the blob?
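For context, this is roughly the sequence of setters the wave module seems to expect before writeframes() will accept data. This is only a sketch based on the errors above, not my actual server code, and the parameter values are guesses:

import wave

# Sketch: wave requires these header parameters before frames can be written.
# The values below are placeholders, not known-correct ones for my audio.
name = "TEST.wav"
audio = wave.open(name, 'wb')
audio.setnchannels(1)      # number of audio channels (1 = mono, 2 = stereo)
audio.setsampwidth(2)      # bytes per sample (1-4, i.e. 8- to 32-bit samples)
audio.setframerate(44100)  # samples per second (e.g. 8000, 16000, 44100)
audio.writeframes(b'')     # frames can only be written after the above are set
audio.close()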
Answer
I had never done this before, but in my test the script below worked for me (although the audio output was not the same as the original file).
>>> nchannels = 2
>>> sampwidth = 2
>>> framerate = 8000
>>> nframes = 100
>>>
>>> import wave
>>>
>>> name = 'output.wav'
>>> audio = wave.open(name, 'wb')
>>> audio.setnchannels(nchannels)
>>> audio.setsampwidth(sampwidth)
>>> audio.setframerate(framerate)
>>> audio.setnframes(nframes)
>>>
>>> blob = open("original.wav").read()  # such as `blob.read()`
>>> audio.writeframes(blob)
>>>
I found this method at https://stackoverflow.com/a/3637480/6396981
Finally, by changing the values of nchannels and sampwidth to 1, I got audio that matched the original file:
nchannels = 1
sampwidth = 1
framerate = 8000
nframes = 1
This was tested under Python 2; on Python 3 it raised an error:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x95 in position 4: invalid start byte
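The Python 3 error comes from reading the WAV file in text mode; opening it in binary mode ('rb') avoids the decode error. A sketch of the Python 3 variant, reusing the same file names and the final parameter values above (those values are still the assumptions from my test, not something derived from your audio):

import wave

nchannels = 1
sampwidth = 1
framerate = 8000
nframes = 1

name = 'output.wav'
audio = wave.open(name, 'wb')
audio.setnchannels(nchannels)
audio.setsampwidth(sampwidth)
audio.setframerate(framerate)
audio.setnframes(nframes)

# Read the source bytes in binary mode so Python 3 does not try to decode
# them as UTF-8 text (that decoding is what raised the UnicodeDecodeError).
blob = open("original.wav", "rb").read()  # such as `blob.read()` on the server side
audio.writeframes(blob)
audio.close()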