Bringup tt-torch models in forge #1314
base: main
Conversation
Force-pushed from 66ff210 to 09430d9.
Force-pushed from 899912e to 823a81e.
Force-pushed from 823a81e to da83043.
Force-pushed from e8ebdde to 8b18209.
Force-pushed from 8b18209 to 8a3aab6.
def load_model():
    model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-1b1")
Maybe pass the model_variant as param?
def load_input():
    test_input = "This is a sample text from "
    tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b1", padding_side="left")
Same here.
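As a minimal sketch of the suggestion, assuming the current checkpoint stays as the default and that the truncated excerpts above return the model and tokenized prompt (both return values are assumptions):

from transformers import AutoModelForCausalLM, AutoTokenizer

def load_model(model_variant="bigscience/bloom-1b1"):
    # The checkpoint becomes a parameter instead of a hard-coded string.
    return AutoModelForCausalLM.from_pretrained(model_variant)

def load_input(model_variant="bigscience/bloom-1b1"):
    test_input = "This is a sample text from "
    tokenizer = AutoTokenizer.from_pretrained(model_variant, padding_side="left")
    # Assumed: the test consumes the tokenized prompt as PyTorch tensors.
    return tokenizer(test_input, return_tensors="pt")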
def load_model():
    model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-kitti")
Pass model_variant as param.
def load_input():
    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = Image.open(requests.get(url, stream=True).raw)
    processor = GLPNImageProcessor.from_pretrained("vinvino02/glpn-kitti")
Pass model_variant as param.
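The same pattern would fit the image models; a short sketch, where the default mirrors the hard-coded string above and the processor call at the end is an assumption about what the test returns:

from transformers import GLPNForDepthEstimation, GLPNImageProcessor
from PIL import Image
import requests

def load_model(model_variant="vinvino02/glpn-kitti"):
    return GLPNForDepthEstimation.from_pretrained(model_variant)

def load_input(model_variant="vinvino02/glpn-kitti"):
    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = Image.open(requests.get(url, stream=True).raw)
    processor = GLPNImageProcessor.from_pretrained(model_variant)
    # Assumed return: preprocessed pixel values as PyTorch tensors.
    return processor(images=image, return_tensors="pt")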
def load_model():
    model = MgpstrForSceneTextRecognition.from_pretrained("alibaba-damo/mgp-str-base")
Pass model_variant as param.
def load_input():
    url = "https://i.postimg.cc/ZKwLg2Gw/367-14.png"
    image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
    processor = MgpstrProcessor.from_pretrained("alibaba-damo/mgp-str-base")
Pass model_variant as param.
def load_input():
    url = "https://i.postimg.cc/ZKwLg2Gw/367-14.png"
Should we really take those images? What do others think?
def load_model():
    model = AutoModelForImageSegmentation.from_pretrained("briaai/RMBG-2.0", trust_remote_code=True)
Pass model_variant as param.
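A sketch of the same change here, assuming only the checkpoint name is parameterized while trust_remote_code stays as-is:

from transformers import AutoModelForImageSegmentation

def load_model(model_variant="briaai/RMBG-2.0"):
    # Only the variant string changes; the extra keyword is kept unchanged.
    return AutoModelForImageSegmentation.from_pretrained(model_variant, trust_remote_code=True)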
def load_model():
    model = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-tiny")
Pass model_variant as param.
Great work @kamalrajkannan78. Thanks for the PR. Left a few review comments. Please address those comments. :)
module_name = build_module_name(
    framework=Framework.PYTORCH,
    model="albert",
    task=Task.QA,
variant missing?
module_name = build_module_name(
    framework=Framework.PYTORCH,
    model="albert",
    task=Task.SEQUENCE_CLASSIFICATION,
missing variant?
variants = ["bert-large-cased-whole-word-masking-finetuned-squad", "phiyodr/bert-large-finetuned-squad2"]
unused?
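If the list is meant to drive the tests, one common way to wire it in is pytest parametrization; a sketch, where the test name and body are placeholders rather than the file's actual test:

import pytest

variants = ["bert-large-cased-whole-word-masking-finetuned-squad", "phiyodr/bert-large-finetuned-squad2"]

@pytest.mark.parametrize("variant", variants)
def test_bert_question_answering(variant):
    # Placeholder body: the real test would load the model and inputs for `variant`.
    assert isinstance(variant, str)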
module_name = build_module_name(
    framework=Framework.PYTORCH,
    model="bloom",
    source=Source.HUGGINGFACE,
Missing variant? Please check all the model test files and add the variants.
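A sketch of the suggested call, assuming build_module_name accepts a variant keyword alongside the fields already shown (the exact signature lives in the repo's utilities and is not reproduced here):

module_name = build_module_name(
    framework=Framework.PYTORCH,
    model="bloom",
    variant=variant,  # assumed keyword name; e.g. "bigscience/bloom-1b1"
    source=Source.HUGGINGFACE,
)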
Summary
This PR addresses Issue #1321 by porting models from tt-torch to tt-forge. The current compilation status of these models is available in the logs.
A list of skipped models along with the reasons for their exclusion can be found in Issue #1339. Any models that were missed in this PR will be included in PR #1337.
Note:
Logs: