I'm trying to convert a question-generation T5 model to a TorchScript model, and while doing that I'm running into this error:
```
ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
```
Here's the code that I ran on Colab:
```python
!pip install -U transformers==3.0.0
!python -m nltk.downloader punkt

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model = AutoModelForSeq2SeqLM.from_pretrained('valhalla/t5-base-qg-hl')

# Input passage with the answer span marked by <hl> tokens
t_input = 'Python is a programming language. It is developed by <hl> Guido Van Rossum <hl>. </s>'

tokenizer = AutoTokenizer.from_pretrained('valhalla/t5-base-qg-hl', return_tensors='pt')

def _tokenize(
    inputs,
    padding=True,
    truncation=True,
    add_special_tokens=True,
    max_length=64
):
    inputs = tokenizer.batch_encode_plus(
        inputs,
        max_length=max_length,
        add_special_tokens=add_special_tokens,
        truncation=truncation,
        padding="max_length" if padding else False,
        pad_to_max_length=padding,
        return_tensors="pt"
    )
    return inputs

token = _tokenize(t_input, padding=True, truncation=True)

traced_model = torch.jit.trace(model, [token['input_ids'], token['attention_mask']])
torch.jit.save(traced_model, "traced_t5.pt")
```
I got this error:
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-1-f9b449524ef1> in <module>()
     32
     33
---> 34 traced_model = torch.jit.trace(model, [token['input_ids'], token['attention_mask']] )
     35 torch.jit.save(traced_model, "traced_t5.pt")

7 frames
/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, past_key_value_states, use_cache, output_attentions, output_hidden_states)
    682         else:
    683             if self.is_decoder:
--> 684                 raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")
    685             else:
    686                 raise ValueError("You have to specify either input_ids or inputs_embeds")

ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
```
How do I resolve this issue? Or is there a better way to convert the T5 model to TorchScript?

Thank you.
Answer
Update: refer to this answer, and if you are exporting T5 to ONNX, it can be done easily using the fastT5 library.
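For reference, here is a minimal sketch of the fastT5 route. It is not from the original answer: `export_and_get_onnx_model` is the entry point documented in the fastT5 README, so verify it against the version you install.

```python
# Hypothetical fastT5 sketch (not from the original answer);
# export_and_get_onnx_model is documented in the fastT5 README --
# check it against the installed version.
from fastT5 import export_and_get_onnx_model
from transformers import AutoTokenizer

model_name = 'valhalla/t5-base-qg-hl'
onnx_model = export_and_get_onnx_model(model_name)  # exports encoder/decoder to ONNX
tokenizer = AutoTokenizer.from_pretrained(model_name)

t_input = 'Python is a programming language. It is developed by <hl> Guido Van Rossum <hl>.'
tokens = tokenizer(t_input, return_tensors='pt')
out = onnx_model.generate(input_ids=tokens['input_ids'],
                          attention_mask=tokens['attention_mask'],
                          max_length=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```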
I figured out what was causing the issue. Since the above model is a sequence-to-sequence (encoder-decoder) model, it has both an encoder and a decoder. We need to pass the features (`input_ids` and `attention_mask`) into the encoder and the labels (targets) into the decoder as `decoder_input_ids`:
```python
traced_model = torch.jit.trace(
    model,
    (input_ids, attention_mask, decoder_input_ids, decoder_attention_mask)
)
torch.jit.save(traced_model, "qg_model.pt")
```
Here, `decoder_input_ids` contains the tokenized ids of the question (in this setup, the question is the label).
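For illustration, the four tensors above can be built with the same `_tokenize` helper defined in the question. The sample question string below is hypothetical, used only to show where the decoder inputs come from.

```python
# Sketch of preparing the trace inputs with the question's _tokenize helper.
# The sample question is a hypothetical label, for illustration only;
# note that batch_encode_plus expects a list of strings.
sample_question = 'Who developed Python?'  # hypothetical label

encoder_inputs = _tokenize([t_input], padding=True, truncation=True)
decoder_inputs = _tokenize([sample_question], padding=True, truncation=True)

input_ids = encoder_inputs['input_ids']
attention_mask = encoder_inputs['attention_mask']
decoder_input_ids = decoder_inputs['input_ids']
decoder_attention_mask = decoder_inputs['attention_mask']
```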
Even though the TorchScript model is created, it does not have the `generate()` method that the Hugging Face T5 model has.
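If you still need generation from the traced module, one workaround is to write a small greedy-decoding loop around the traced forward pass. Below is a minimal sketch under two assumptions that are not stated in the original answer: the traced model returns the LM logits as the first element of its output tuple, and T5 starts decoding from the pad token id.

```python
# Hypothetical greedy-decoding loop around the traced model (not from the
# original answer). Assumes:
#  - the traced forward returns a tuple whose first element is the LM
#    logits with shape (batch, seq_len, vocab_size)
#  - T5's decoder start token is the pad token id
import torch

def greedy_generate(traced_model, tokenizer, input_ids, attention_mask, max_length=32):
    decoder_input_ids = torch.full(
        (input_ids.size(0), 1), tokenizer.pad_token_id, dtype=torch.long
    )
    for _ in range(max_length):
        decoder_attention_mask = torch.ones_like(decoder_input_ids)
        outputs = traced_model(
            input_ids, attention_mask, decoder_input_ids, decoder_attention_mask
        )
        # Pick the most likely next token for every sequence in the batch
        next_token = outputs[0][:, -1, :].argmax(dim=-1, keepdim=True)
        decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
        if (next_token == tokenizer.eos_token_id).all():
            break
    return decoder_input_ids
```

Keep in mind that a traced module only records the operations seen for the example inputs, so if the forward pass has input-dependent control flow the loop may not behave correctly for sequence lengths other than the ones it was traced with.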