Hi!
I’ve been following the instructions here: Setting Up ExecuTorch — ExecuTorch 0.5 documentation, and the bundled add.pte example runs fine:
```
(executorch) raphy@raohy:~/executorch$ ./cmake-out/executor_runner --model_path ../example_files/add.pte
I 00:00:00.000294 executorch:executor_runner.cpp:82] Model file ../example_files/add.pte is loaded.
I 00:00:00.000308 executorch:executor_runner.cpp:91] Using method forward
I 00:00:00.000317 executorch:executor_runner.cpp:138] Setting up planned buffer 0, size 48.
I 00:00:00.000350 executorch:executor_runner.cpp:161] Method loaded.
I 00:00:00.000369 executorch:executor_runner.cpp:171] Inputs prepared.
I 00:00:00.000395 executorch:executor_runner.cpp:180] Model executed successfully.
I 00:00:00.000399 executorch:executor_runner.cpp:184] 1 outputs:
Output 0: tensor(sizes=[1], [2.])
```
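For reference, the add.pte above was generated with the export flow from the getting-started docs, which (as I understand it) boils down to:

```python
import torch
from torch.export import export
from executorch.exir import to_edge

# Toy model from the docs: adds two input tensors.
class Add(torch.nn.Module):
    def forward(self, x: torch.Tensor, y: torch.Tensor):
        return x + y

# 1. torch.export: capture the program as ATen operators.
aten_dialect = export(Add(), (torch.ones(1), torch.ones(1)))
# 2. to_edge: lower to the Edge dialect.
edge_program = to_edge(aten_dialect)
# 3. to_executorch: compile to an ExecuTorch program.
executorch_program = edge_program.to_executorch()
# 4. Serialize the .pte file that executor_runner loads.
with open("add.pte", "wb") as f:
    f.write(executorch_program.buffer)
```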
But now I want to convert, and then execute with ExecuTorch, the following fine-tuned model: Fine-Tuning-BERT-for-Named-Entity-Recognition/BERTfineTunningFinal.ipynb at main · tozameerkhan/Fine-Tuning-BERT-for-Named-Entity-Recognition · GitHub.
I’ve already fine-tuned the model and saved it as safetensors:
```
(.bftner) (base) raphy@raohy:~/BertFineTuningForNERPyTorch$ ls -lah
total 132K
drwxrwxr-x   7 raphy raphy 4,0K feb 19 14:49 .
drwxr-x--- 156 raphy raphy  12K feb 19 13:31 ..
-rw-rw-r--   1 raphy raphy 5,3K feb 18 11:50 '=0.26.0'
-rw-rw-r--   1 raphy raphy 8,3K feb 19 14:35 BERT-NER-ExportableToExecuteTorch.py
-rw-rw-r--   1 raphy raphy 8,2K feb 18 15:42 BERT-NER.py
drwxrwxr-x   6 raphy raphy 4,0K feb 18 11:41 .bftner
drwxrwxr-x   2 raphy raphy 4,0K feb 18 14:47 ner_model
drwxrwxr-x   5 raphy raphy 4,0K feb 18 17:14 results
drwxrwxr-x   2 raphy raphy 4,0K feb 18 14:47 tokenizer
(.bftner) (base) raphy@raohy:~/BertFineTuningForNERPyTorch$ ls -lah ./ner_model/
total 416M
drwxrwxr-x 2 raphy raphy 4,0K feb 18 14:47 .
drwxrwxr-x 7 raphy raphy 4,0K feb 19 15:24 ..
-rw-rw-r-- 1 raphy raphy  896 feb 18 18:47 config.json
-rw-rw-r-- 1 raphy raphy 416M feb 18 18:47 model.safetensors
(.bftner) (base) raphy@raohy:~/BertFineTuningForNERPyTorch$ ls -lah ./results/
total 20K
drwxrwxr-x 5 raphy raphy 4,0K feb 18 17:14 .
drwxrwxr-x 7 raphy raphy 4,0K feb 19 15:24 ..
drwxrwxr-x 2 raphy raphy 4,0K feb 18 17:14 checkpoint-1756
drwxrwxr-x 2 raphy raphy 4,0K feb 18 14:03 checkpoint-2634
drwxrwxr-x 2 raphy raphy 4,0K feb 18 14:47 checkpoint-3512
(.bftner) (base) raphy@raohy:~/BertFineTuningForNERPyTorch$ ls -lah ./tokenizer/
total 940K
drwxrwxr-x 2 raphy raphy 4,0K feb 18 14:47 .
drwxrwxr-x 7 raphy raphy 4,0K feb 19 15:24 ..
-rw-rw-r-- 1 raphy raphy  125 feb 18 18:47 special_tokens_map.json
-rw-rw-r-- 1 raphy raphy 1,2K feb 18 18:47 tokenizer_config.json
-rw-rw-r-- 1 raphy raphy 695K feb 18 18:47 tokenizer.json
-rw-rw-r-- 1 raphy raphy 227K feb 18 18:47 vocab.txt
```
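The checkpoint itself seems fine: I can load the fine-tuned model and tokenizer back in eager PyTorch like this (a quick sanity check, using the directories shown above):

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# ner_model/ holds config.json + model.safetensors; tokenizer/ holds the
# tokenizer files listed above.
model = AutoModelForTokenClassification.from_pretrained("./ner_model")
tokenizer = AutoTokenizer.from_pretrained("./tokenizer")
model.eval()

inputs = tokenizer("Barack Obama visited Paris.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (1, sequence_length, num_labels)
```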
I’m confused and lost. How should I proceed from here to convert this fine-tuned model into a .pte that ExecuTorch can execute in a desktop environment?
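My best guess from the docs is that the conversion follows the same export → to_edge → to_executorch flow as the add example. Here is an untested sketch of what I have in mind (the NERWrapper class, the sample sentence, and the fixed max_length=128 padding are my own assumptions, not from the docs):

```python
import torch
from torch.export import export
from executorch.exir import to_edge
from transformers import AutoModelForTokenClassification, AutoTokenizer

model = AutoModelForTokenClassification.from_pretrained("./ner_model")
tokenizer = AutoTokenizer.from_pretrained("./tokenizer")
model.eval()

# Hypothetical wrapper so the exported graph takes plain tensors and
# returns just the logits instead of a HF output dataclass.
class NERWrapper(torch.nn.Module):
    def __init__(self, m):
        super().__init__()
        self.m = m

    def forward(self, input_ids, attention_mask):
        return self.m(input_ids=input_ids, attention_mask=attention_mask).logits

# Fixed-size example inputs (static shapes) for tracing.
enc = tokenizer("Barack Obama visited Paris.", return_tensors="pt",
                padding="max_length", max_length=128, truncation=True)
example_args = (enc["input_ids"], enc["attention_mask"])

aten_dialect = export(NERWrapper(model), example_args)
edge_program = to_edge(aten_dialect)
executorch_program = edge_program.to_executorch()

with open("bert_ner.pte", "wb") as f:
    f.write(executorch_program.buffer)
```

Would the resulting bert_ner.pte then run with ./cmake-out/executor_runner --model_path bert_ner.pte like the add example did, or does a model like BERT need extra steps? Does this look like the right approach?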