It would be great to add the new https://github.com/microsoft/markitdown to the benchmark, thanks! ☺️
Hi, we tried to run model inference for MarkItDown but got empty results. Please let us know if there are any issues with the inference code.
Here is the model inference code:

```python
from markitdown import MarkItDown
import os

img_folder = './OmniDocBench/images'
save_path = './result0106/markitdown'
os.makedirs(save_path, exist_ok=True)  # ensure the output directory exists

md = MarkItDown()
for img_name in os.listdir(img_folder):
    result = md.convert(os.path.join(img_folder, img_name))
    response = result.text_content
    # splitext handles extensions of any length (.png, .jpeg, ...)
    out_name = os.path.splitext(img_name)[0] + '.md'
    with open(os.path.join(save_path, out_name), 'w', encoding='utf-8') as output_file:
        output_file.write(response)
```
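As a side note on the output naming: deriving the Markdown filename with `os.path.splitext` is safer than slicing a fixed number of trailing characters, since image extensions vary in length (`.png` vs `.jpeg`). A minimal, standalone sketch (the helper name `md_output_name` is hypothetical, just for illustration):

```python
import os


def md_output_name(img_name: str) -> str:
    """Map an image filename to its Markdown output filename.

    os.path.splitext splits on the last dot, so it works for
    extensions of any length, unlike a fixed-width slice.
    """
    stem, _ext = os.path.splitext(img_name)
    return stem + ".md"


print(md_output_name("page_001.jpeg"))  # page_001.md
print(md_output_name("scan.png"))      # scan.md
```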
Hi @ouyanglinke, I haven't had a chance to try markitdown myself. It is quite new.