Commit d269d12 ("minor update", parent 6d06666)
1 file changed: 22 additions, 4 deletions

public/OpenGO1/index.html

@@ -263,17 +263,35 @@ <h4 class="text-white w-full max-w-wide leading-loose ">
   Cross-Embodiment Transfer: Do We Really Need Multi-Embodiment Pre-Training Data?
 </h4>
 
-<figure class="w-full flex flex-col laptop:flex-row justify-center gap-12 laptop:gap-6 max-w-wide mt-3 mb-3 laptop:mt-6 laptop:mb-6", style="max-width: 1152px;">
-  <video class="h-full object-cover bg-[url('/OpenGO1/franka_fold.png')] bg-center bg-cover bg-black bg-opacity-30 bg-blend-overlay" style="width: 30%; max-width: 30%;" preload="none" autoplay loop muted>
+<style>
+  @media (min-width: 1024px) {
+    .video-30 {
+      width: 30% !important;
+      max-width: 30% !important;
+    }
+  }
+</style>
+
+<figure class="w-full flex flex-col laptop:flex-row justify-center gap-4 laptop:gap-6 max-w-wide mt-3 mb-3 laptop:mt-6 laptop:mb-6 mx-auto" style="max-width: 1152px;">
+  <video class="video-30 w-full h-auto laptop:h-full object-cover bg-[url('/OpenGO1/franka_fold.png')] bg-center bg-cover bg-black bg-opacity-30 bg-blend-overlay" preload="none" autoplay loop muted>
     <source src="https://opendrivelab.github.io/OpenGO1/franka_fold.mp4"/>
   </video>
-  <video class="h-full object-cover bg-[url('/OpenGO1/g1_fold.png')] bg-center bg-cover bg-black bg-opacity-30 bg-blend-overlay" style="width: 30%; max-width: 30%;" preload="none" autoplay loop muted>
+  <video class="video-30 w-full h-auto laptop:h-full object-cover bg-[url('/OpenGO1/g1_fold.png')] bg-center bg-cover bg-black bg-opacity-30 bg-blend-overlay" preload="none" autoplay loop muted>
     <source src="https://opendrivelab.github.io/OpenGO1/g1_fold.mp4"/>
   </video>
-  <video class="h-full object-cover bg-[url('/OpenGO1/agilex_fold.png')] bg-center bg-cover bg-black bg-opacity-30 bg-blend-overlay" style="width: 30%; max-width: 30%;" preload="none" autoplay loop muted>
+  <video class="video-30 w-full h-auto laptop:h-full object-cover bg-[url('/OpenGO1/agilex_fold.png')] bg-center bg-cover bg-black bg-opacity-30 bg-blend-overlay" preload="none" autoplay loop muted>
     <source src="https://opendrivelab.github.io/OpenGO1/agilex_fold.mp4"/>
   </video>
 </figure>
+
+
+
+
+
+
+
+
+
 
 <p class="text-white w-full max-w-wide leading-loose">
   Building truly foundational robotic models means they should work across different robot bodies, but do we really need to train on multiple robot types from the start? The robotics community typically assumes yes, but we had a different hypothesis. Since robots with different shapes can often produce similar behaviors when their end-effectors follow the same path in space, we wondered: what if a model trained on just one robot could easily transfer to others? To test this, we train our model exclusively on the AgiBot G1 and then evaluate it across various simulated and real-world platforms. The results are remarkable: our <b class="text-white font-black">single-embodiment pre-trained model adapts well</b> and even shows better scaling properties during fine-tuning than models pre-trained on diverse robot datasets. The videos below demonstrate this with one of robotics' most challenging tasks: cloth folding. Using fewer than 200 folding demonstrations, our pre-trained model successfully transfers to entirely new robot embodiments, proving that single-embodiment pre-training can be a viable path to cross-embodiment generalization without the complexity of multi-robot training.
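For readers skimming the diff, here is a minimal standalone sketch of the responsive pattern this commit introduces: the inline style="width: 30%; max-width: 30%;" that forced 30%-wide videos at every viewport is replaced by a .video-30 class scoped to a min-width: 1024px media query, so the three videos stack full-width on narrow screens and sit side by side at 30% width on wider ones. The sketch assumes the site's custom laptop: Tailwind prefix corresponds to the same 1024px breakpoint, and it drops the Tailwind utility classes and poster-image backgrounds for brevity.

<!-- Minimal sketch of the commit's responsive layout (not the site's
     actual page): videos stack full-width below 1024px and render
     side by side at 30% width above it. Plain CSS stands in for the
     Tailwind utilities used in the real markup. -->
<style>
  figure {
    display: flex;
    flex-direction: column;   /* stacked on phones */
    gap: 1rem;
    max-width: 1152px;
    margin: 0 auto;
  }
  video {
    width: 100%;              /* full-width by default */
    height: auto;
    object-fit: cover;
  }
  @media (min-width: 1024px) {
    figure { flex-direction: row; justify-content: center; gap: 1.5rem; }
    .video-30 { width: 30% !important; max-width: 30% !important; }
  }
</style>
<figure>
  <video class="video-30" preload="none" autoplay loop muted>
    <source src="https://opendrivelab.github.io/OpenGO1/franka_fold.mp4"/>
  </video>
  <video class="video-30" preload="none" autoplay loop muted>
    <source src="https://opendrivelab.github.io/OpenGO1/g1_fold.mp4"/>
  </video>
  <video class="video-30" preload="none" autoplay loop muted>
    <source src="https://opendrivelab.github.io/OpenGO1/agilex_fold.mp4"/>
  </video>
</figure>

The !important flags mirror the committed CSS; they let .video-30 win over the Tailwind w-full utility on the same element without touching the utility classes themselves.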
