Usage question #28

Open
AdeelH opened this issue May 31, 2024 · 0 comments

Comments

AdeelH commented May 31, 2024

I am interested in evaluating the text2earth model for text-to-image retrieval and want to compare it to CLIP-based models.

My assumption was that text2earth is a text encoder that maps text into the same embedding space as the Clay image embeddings, so that I could do the following:

  1. Use the Clay v1 model to create embeddings for some chips
  2. Find a text2earth model compatible with the v1 model
  3. Use it to embed natural language text queries like "running track", "house with swimming pool" etc.
  4. Compute similarity scores between the text embedding and the chip embeddings
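The workflow above could be sketched roughly as follows. This is only an illustration of the intended retrieval step, not actual Clay or text2earth API calls: the embeddings here are random placeholders standing in for (1) Clay v1 chip embeddings and (3) a text2earth query embedding, and step (4) is plain cosine similarity over L2-normalized vectors.

```python
import numpy as np

# Placeholder embeddings. In practice, chip_embeddings would come from the
# Clay v1 image encoder (step 1) and text_embedding from a compatible
# text2earth text encoder (step 3). Assumed shapes: (n_chips, dim) and (dim,).
rng = np.random.default_rng(0)
chip_embeddings = rng.normal(size=(100, 768))
chip_embeddings /= np.linalg.norm(chip_embeddings, axis=1, keepdims=True)

text_embedding = rng.normal(size=768)
text_embedding /= np.linalg.norm(text_embedding)

# Step 4: cosine similarity between the text query and each chip.
# With unit-norm vectors, the dot product is the cosine similarity.
scores = chip_embeddings @ text_embedding

# Rank chips by similarity to the query, best match first.
ranking = np.argsort(scores)[::-1]
top_k = ranking[:5]
```

This assumes the two encoders were trained to share an embedding space (as in CLIP); if they were not, the similarity scores would be meaningless, which is the crux of the questions below.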

But I am a little confused by the example notebooks (such as this one).

Questions:

  • Is the workflow described above currently supported?
  • Is there a text2earth model compatible with the v1 model?