diff --git a/README.md b/README.md
index e84d02d448..a50313fa2c 100644
--- a/README.md
+++ b/README.md
@@ -1,16 +1,32 @@
-# Nerfies
-
-This is the repository that contains source code for the [Nerfies website](https://nerfies.github.io).
-
-If you find Nerfies useful for your work please cite:
-```
-@article{park2021nerfies
- author = {Park, Keunhong and Sinha, Utkarsh and Barron, Jonathan T. and Bouaziz, Sofien and Goldman, Dan B and Seitz, Steven M. and Martin-Brualla, Ricardo},
- title = {Nerfies: Deformable Neural Radiance Fields},
- journal = {ICCV},
- year = {2021},
-}
-```
-
-# Website License
- This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
+# V-Skip
+
+
+
+---
+
+## 🚀 Introduction
+
+This repository contains the official implementation (and project page source) for the paper **"Chain-of-Thought Compression Should Not Be Blind: V-Skip for Efficient Multimodal Reasoning via Dual-Path Anchoring"**.
+
+**V-Skip** is a token pruning framework for Multimodal Large Language Models (MLLMs). It addresses the **"Visual Amnesia"** failure mode of standard text-centric compression methods: by employing a dual-path gating mechanism (Linguistic Surprisal + Visual Attention Flow), V-Skip preserves visually salient tokens while still reducing inference latency.
+
+
+*Figure 1: Comparison of compression paradigms. V-Skip successfully rescues visual anchors (e.g., "red") that are blindly pruned by text-only methods.*
+
+## 📈 Key Results
+- **Speedup:** Achieves a **2.9x** inference speedup on Qwen2-VL.
+- **Accuracy:** Outperforms baselines by over **30%** on the DocVQA benchmark.
+- **Robustness:** Effectively prevents object hallucination caused by over-pruning.
+
+## 🛠️ Usage
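
The released code is not shown in this hunk, so the following is only a minimal, self-contained sketch of the dual-path gating idea described above. The function name `v_skip_keep_mask`, the min-max normalization, and the linear fusion weight `alpha` are illustrative assumptions, not the paper's actual API or scoring rule.

```python
import numpy as np

def v_skip_keep_mask(surprisal, visual_attention, keep_ratio=0.5, alpha=0.5):
    """Hypothetical sketch of dual-path anchoring: fuse linguistic surprisal
    with visual attention flow, then keep the top-scoring fraction of tokens."""
    def norm(x):
        # Min-max normalize each signal so the two paths are comparable.
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    score = alpha * norm(surprisal) + (1 - alpha) * norm(visual_attention)
    k = max(1, int(round(len(score) * keep_ratio)))
    keep = np.zeros(len(score), dtype=bool)
    keep[np.argsort(score)[-k:]] = True  # highest fused scores survive pruning
    return keep
```

In this toy setup, a token with low surprisal but high visual attention (a visual anchor such as "red" in Figure 1) earns a high fused score and survives pruning, whereas a text-only criterion would discard it.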
diff --git a/index.html b/index.html
index 373119fe36..396707a1e5 100644
--- a/index.html
+++ b/index.html
@@ -2,25 +2,11 @@
-
-
+
+
- Nerfies: Deformable Neural Radiance Fields
+ V-Skip: Chain-of-Thought Compression Should Not Be Blind
-
-
-
@@ -32,7 +18,7 @@
-
+
@@ -42,45 +28,40 @@
-