
where are the extracted concepts stored in the VCC? #4

quinnliu opened this issue Jan 16, 2021 · 0 comments

Comments

@quinnliu

I've read the paper "Beyond Imitation: Zero-Shot Task Transfer on Robots by Learning Concepts as Cognitive Programs" && several of your other papers, and really enjoyed them.

I'm currently planning to replicate your results in https://github.com/ARISE-Initiative/robosuite to really understand what is going on, by implementing the VCC in 2D in Python 3.7.4 && training it on 2D input && output examples so that general concepts can be extracted (a toy sketch of what I mean by 2D examples is below).
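
For concreteness, here is a minimal sketch (numpy assumed) of how I'd represent a 2D input/output training pair as grids. The integer encoding is my own assumption, not your actual format:

```python
import numpy as np

# Hypothetical encoding: 0 = empty cell, 1 && 2 = two distinct objects.
input_scene = np.array([
    [0, 0, 0],
    [1, 0, 2],
    [0, 0, 0],
])

# The concept "move left-most object to the top", applied by hand:
# object 1 (left-most, column 0) moves to the top row; object 2 stays put.
output_scene = np.array([
    [1, 0, 0],
    [0, 0, 2],
    [0, 0, 0],
])

training_pair = (input_scene, output_scene)
```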

Then I plan to train it on 3D input && output examples to see if a VCC with 3D primitives can learn a concept that turns 12 3D bricks into a 3D wall.

I'm confused as to how to train the VCC in 2D. The training examples file training_examples.pkl seems to be a minimal representation of a series of input && output images generated using primitive_shapes.py, with an unknown number of examples per concept.
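
For example, here is how I've been poking at the file to figure out its structure. The dict-of-concepts guess in the loop is only my assumption:

```python
import pickle

# Load the pickled training examples shipped with the repo.
with open("training_examples.pkl", "rb") as f:
    examples = pickle.load(f)

# Inspect the top-level structure to see how concepts map to
# input/output example pairs.
print(type(examples))
if isinstance(examples, dict):
    for concept_name, concept_examples in examples.items():
        print(concept_name, len(concept_examples))
```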

I'm also confused about how exactly the VCC extracts the concept representation from these training examples.

For example: are all of the input && output training examples viewed by the vision hierarchy (VH) before the VCC is given novel input_scenes, so that specific nodes at the top of the vision hierarchy (the neural Recursive Cortical Network, neural-RCN) come to represent concepts such as the following (a toy sketch of the pipeline I'm imagining follows the list)?

  • move left-most object to the top
  • arrange green objects in a circle
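
To make the question concrete, here is a toy sketch of the training/execution split I'm imagining. Every class && method name here (VisionHierarchy, VCC, induce_program, execute) is hypothetical, purely for illustration, and not your actual API:

```python
from typing import Any, List, Tuple

class VisionHierarchy:
    """Stand-in for the neural-RCN vision hierarchy (hypothetical)."""

    def encode(self, scene: Any) -> Any:
        # The real VH would parse the scene into objects and attributes;
        # this placeholder just passes the scene through.
        return scene

class VCC:
    """Stand-in for the Visual Cognitive Computer (hypothetical)."""

    def induce_program(self, encoded_pairs: List[Tuple[Any, Any]]) -> List[str]:
        # The real VCC would search for a cognitive program consistent
        # with every input/output pair; this returns dummy primitives.
        return ["attend_leftmost_object", "move_to_top"]

    def execute(self, program: List[str], encoded_scene: Any) -> Any:
        # The real VCC would run the program's primitives on the scene.
        return encoded_scene

def train_and_run(training_pairs, novel_input_scenes):
    vh, vcc = VisionHierarchy(), VCC()
    # Step 1: the VH views every input/output training pair first.
    encoded_pairs = [(vh.encode(i), vh.encode(o)) for i, o in training_pairs]
    # Step 2: the VCC extracts (induces) a program from the encoded pairs.
    program = vcc.induce_program(encoded_pairs)
    # Step 3: only then is the program executed on novel input_scenes.
    return [vcc.execute(program, vh.encode(s)) for s in novel_input_scenes]
```

Is this roughly the right mental model, or does the VH continue to be updated while the VCC searches for programs?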

Best,
Q
