Commit a541f11
Merge pull request #22 from eclipsevortex/release/1.1.2
Fix migration mode (#21)
2 parents 5085da7 + 8de718a commit a541f11

File tree: 9 files changed, +207 −49 lines changed


VERSION

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-1.1.1
+1.1.2

scripts/miner/README.md

Lines changed: 113 additions & 0 deletions

@@ -25,6 +25,7 @@ This guide explains how to **install** and **uninstall** manually the Miner.
 - [Available Components](#available-components)
 - [Monitoring & Logs](#monitoring-and-logs)
 - [Quick Restart](#quick-restart)
+- [View Scores on Wandb](#neuron-wandb-scores)
 - [Per-Component](#per-component)
 - [Redis](#redis)
 - [Metagraph](#metagraph)
@@ -163,6 +164,25 @@ It will restart the Miner's components using the `EXECUTION_METHOD`, which defau
 
 <br />
 
+# 🔬 View Scores on Wandb <a id="neuron-wandb-scores"></a>
+
+In addition to checking local logs, you can also view your Neuron’s challenge scores and activity in the **Wandb dashboards** maintained by the validators.
+
+## 🛰️ Validator Dashboard Table
+
+Browse the full list of validators and their submissions here:
+👉 [SubVortex Validator Table on Wandb](https://wandb.ai/eclipsevortext/subvortex-team/table?nw=nwusereclipsevortext)
+
+## 🔍 View Detailed Validator Run
+
+To dive into individual challenge sessions, open a specific validator’s run:
+👉 [Example Validator Run](https://wandb.ai/eclipsevortext/subvortex-team/runs/ewiaytb4?nw=nwuser0xtkd)
+
+> 💡 Scores in Wandb are updated when validators successfully receive, evaluate, and upload challenge results.
+> If your UID doesn't appear, make sure your node is reachable and correctly emitting events.
+
+<br />
+
 # Per-Component <a id="per-component"></a>
 
 ## Redis <a id="redis"></a>
@@ -334,3 +354,96 @@ To uninstall Neuron for the Miner:
 
 Need help or want to chat with other SubVortex users?
 Join us on [Discord](https://discord.gg/bittensor)!
+
+### ✅ Connectivity Checklist <a id="neuron-connectivity-check"></a>
+
+After installing and starting the Neuron, it's essential to verify that your Miner is **externally reachable**. Validators need to connect to both your **Miner** and your **Subtensor** node to send challenges and record results.
+
+#### 🔌 1. Check Miner Port Accessibility
+
+Verify that port `8091` (used for challenge handling) is accessible from the public internet.
+
+From a **remote machine** (not the miner host), run:
+
+```bash
+nc -zv <YOUR_MINER_PUBLIC_IP> 8091
+```
+
+✅ Expected output:
+
+```
+Connection to <YOUR_MINER_PUBLIC_IP> port 8091 [tcp/*] succeeded!
+```
+
+> ⚠️ If this fails, check for:
+>
+> - Blocked ports in `ufw`, `iptables`, or cloud security groups
+> - NAT/router not forwarding the port correctly
+> - Misconfigured HAProxy or service not running
+
+---
+
+#### 🔌 2. Check Subtensor Node WebSocket Accessibility
+
+Make sure your Subtensor node exposes a WebSocket at port `9944`.
+
+Run this from an external machine:
+
+```bash
+wscat -c ws://<YOUR_SUBTENSOR_PUBLIC_IP>:9944
+```
+
+✅ Expected output:
+
+```
+Connected (press CTRL+C to quit)
+>
+```
+
+> 📦 You can install `wscat` via:
+>
+> ```bash
+> npm install -g wscat
+> ```
+
+---
+
+#### 📡 3. Confirm the Neuron is Receiving Challenges
+
+After startup, check your logs to confirm that scores are reaching your neuron:
+
+- `service` (systemd)
+
+  ```bash
+  tail -f /var/log/subvortex-miner/subvortex-miner-neuron.log | grep Score
+  ```
+
+- `process` (PM2)
+
+  ```bash
+  pm2 log subvortex-miner-neuron | grep Score
+  ```
+
+- `container` (Docker)
+
+  ```bash
+  docker logs subvortex-miner-neuron -f | grep Score
+  ```
+
+Look for lines like:
+
+```
+247|subvortex-miner-neuron | 2025-05-24 21:36:12.185 | INFO | [20] Availability score 1.0
+247|subvortex-miner-neuron | 2025-05-24 21:36:12.185 | INFO | [20] Latency score 1.0
+247|subvortex-miner-neuron | 2025-05-24 21:36:12.185 | INFO | [20] Reliability score 0.8558733500690666
+247|subvortex-miner-neuron | 2025-05-24 21:36:12.185 | INFO | [20] Distribution score 1.0
+247|subvortex-miner-neuron | 2025-05-24 21:36:12.185 | SUCCESS | [20] Score 0.9711746700138133
+```
+
+---
+
+If you're not receiving challenges:
+
+- ✅ Double-check that Metagraph and Redis are correctly synced.
+- ✅ Confirm your neuron is registered and emitting correctly on-chain.
+- ✅ Verify your challenge port and WebSocket endpoint are **publicly reachable**.
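Checks 1 and 2 above can also be scripted. Below is a minimal Python sketch using only the standard library; the IP address is a placeholder, and it only proves TCP reachability (the same thing `nc -zv` verifies), so a full WebSocket handshake still needs a client such as `wscat`:

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, as `nc -zv` would."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as err:
        print(f"{host}:{port} unreachable: {err}")
        return False

# Placeholder IP: substitute your miner's / subtensor's public address.
for port, label in [(8091, "miner challenge port"), (9944, "subtensor websocket port")]:
    status = "open" if check_tcp("203.0.113.10", port) else "CLOSED"
    print(f"{label} ({port}): {status}")
```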

subvortex/auto_upgrader/pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "subvortex-auto-upgrader"
-version = "1.1.1"
+version = "1.1.2"
 description = "SubVortex Auto Upgrader"
 authors = [{ name = "Eclipse Vortex", email = "subvortex.bt@gmail.com" }]
 readme = "README.md"

subvortex/auto_upgrader/src/migrations/redis_migrations.py

Lines changed: 50 additions & 32 deletions

@@ -139,25 +139,25 @@ async def apply(self):
             prefix=sauc.SV_LOGGER_NAME,
         )
 
-        # Determine the highest revions
-        highest_revision = (
+        # Determine the highest revision
+        next_revision = (
             sorted(new_revisions, key=lambda v: Version(v))[-1]
-            if len(new_revisions) > 0
+            if new_revisions
             else "0.0.0"
         )
 
         revision = current_version
-        if Version(current_version) < Version(highest_revision):
+        if Version(current_version) < Version(next_revision):
             revision = await self._upgrade(
                 database=database,
                 revisions=new_revisions,
                 current_version=current_version,
             )
-        elif Version(current_version) > Version(highest_revision):
+        elif Version(current_version) > Version(next_revision):
             revision = await self._downgrade(
                 database=database,
                 revisions=old_revisions,
-                current_version=highest_revision,
+                next_version=next_revision,
             )
         else:
             btul.logging.info(
@@ -198,6 +198,9 @@ async def rollback(self):
             prefix=sauc.SV_LOGGER_NAME,
         )
 
+        # Final rollback target is the down_revision of the last applied revision
+        final_version = self.graph.get(self.applied_revisions[0]) or "0.0.0"
+
        try:
             # Rollback in reverse order
             for rev in reversed(self.applied_revisions):
@@ -213,28 +216,24 @@ async def rollback(self):
                 await database.set(f"migration_mode:{rev}", "dual")
 
                 btul.logging.trace(
-                    f"[Rev {rev}] Executing rollback step", prefix=sauc.SV_LOGGER_NAME
+                    f"[Rev {rev}] Executing rollback step",
+                    prefix=sauc.SV_LOGGER_NAME,
                 )
                 await self.modules[rev].rollback(database)
 
-                # Get the parent revision
-                parent_version = self.graph.get(rev) or "0.0.0"
-
                 btul.logging.trace(
-                    f"[Rev {rev}] Finalizing migration — setting mode to 'new' and version to '{parent_version}'",
+                    f"[Rev {rev}] Deleting migration_mode:{rev}",
                     prefix=sauc.SV_LOGGER_NAME,
                 )
-                if parent_version:
-                    await database.set("version", parent_version)
-                    await database.set(f"migration_mode:{parent_version}", "legacy")
-                else:
-                    await database.set("version", parent_version)
+                await database.delete(f"migration_mode:{rev}")
 
-            # Clear applied revisions after rollback
             self.applied_revisions.clear()
 
         finally:
             if database:
+                # Set final rollback version and its migration_mode
+                await database.set("version", final_version)
+                await database.set(f"migration_mode:{final_version}", "new")
                 await database.close()
 
     async def _upgrade(self, database, revisions, current_version):
@@ -244,66 +243,85 @@ async def _upgrade(self, database, revisions, current_version):
         )
 
         # Sort correctly
-        started = current_version == "0.0.0"
+        skip = current_version == "0.0.0"
+        previous_rev = current_version
         for rev in sorted(revisions, key=lambda v: Version(v)):
-            if not started:
-                if rev == current_version:
-                    started = True
+            if not skip:
+                if Version(rev) <= Version(current_version):
+                    continue
 
-                continue  # skip until current_version is reached
+                skip = False
 
             btul.logging.info(
                 f"⬆️ Applying migration for {self.service_name}: {rev}",
                 prefix=sauc.SV_LOGGER_NAME,
             )
 
+            # Flag the version as dual
             btul.logging.trace(
                 f"[Rev {rev}] Setting migration mode: dual",
                 prefix=sauc.SV_LOGGER_NAME,
             )
             await database.set(f"migration_mode:{rev}", "dual")
 
+            # Rollout the version
             btul.logging.trace(
                 f"[Rev {rev}] Executing rollout step", prefix=sauc.SV_LOGGER_NAME
             )
             await self.modules[rev].rollout(database)
 
+            # Flag the version as new
             btul.logging.trace(
                 f"[Rev {rev}] Finalizing migration — setting mode to 'new' and version to '{rev}'",
                 prefix=sauc.SV_LOGGER_NAME,
             )
             await database.set("version", rev)
             await database.set(f"migration_mode:{rev}", "new")
 
+            # Remove the previous version
+            if previous_rev:
+                await database.delete(f"migration_mode:{previous_rev}")
+
+            previous_rev = rev
             self.applied_revisions.append(rev)
 
         return rev or "0.0.0"
 
-    async def _downgrade(self, database, revisions, current_version):
+    async def _downgrade(self, database, revisions, next_version):
         btul.logging.info(
             f"⬇️ Running downgrade migrations for {self.service_name}...",
             prefix=sauc.SV_LOGGER_NAME,
         )
 
-        parent_version = "0.0.0"
         for rev in sorted(revisions, key=lambda v: Version(v), reverse=True):
-            if Version(rev) <= Version(current_version):
+            if Version(rev) <= Version(next_version):
                 break
-
             btul.logging.info(
                 f"⬇️ Rolling back migration for {self.service_name}: {rev}",
                 prefix=sauc.SV_LOGGER_NAME,
             )
+
+            # Flag the version as dual before rollback
+            btul.logging.trace(
+                f"[Rev {rev}] Setting migration mode: dual",
+                prefix=sauc.SV_LOGGER_NAME,
+            )
             await database.set(f"migration_mode:{rev}", "dual")
+
+            # Execute the rollback step
+            btul.logging.trace(
+                f"[Rev {rev}] Executing rollback step",
+                prefix=sauc.SV_LOGGER_NAME,
+            )
             await self.modules[rev].rollback(database)
 
-            parent_version = self.graph.get(rev)
-            if parent_version:
-                await database.set("version", parent_version)
-                await database.set(f"migration_mode:{parent_version}", "legacy")
-            else:
-                await database.set("version", "0.0.0")
+            # Set parent version if available
+            parent_version = self.graph.get(rev, "0.0.0") or "0.0.0"
+            await database.set("version", parent_version)
+            await database.set(f"migration_mode:{parent_version}", "new")
+            await database.delete(f"migration_mode:{rev}")
 
+            # Track successful rollback
            self.applied_revisions.append(rev)
 
        return parent_version
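To make the corrected `migration_mode` lifecycle concrete, here is a minimal runnable sketch of the upgrade path this commit settles on: flag the revision `dual`, roll it out, flag it `new`, then drop the previous revision's flag. The in-memory `Database` and the revision list are stand-ins, not the project's real classes; only `packaging`, which the migration code above already uses for `Version`, is required.

```python
import asyncio
from packaging.version import Version

class Database:
    """In-memory stand-in for the async Redis wrapper."""
    def __init__(self):
        self.store = {}
    async def set(self, key, value):
        self.store[key] = value
    async def delete(self, key):
        self.store.pop(key, None)

async def upgrade(database, revisions, current_version):
    # Mirrors the fixed flow: flag dual, roll out, flag new, drop the previous flag.
    previous_rev = current_version
    for rev in sorted(revisions, key=Version):
        if Version(rev) <= Version(current_version):
            continue  # already applied
        await database.set(f"migration_mode:{rev}", "dual")
        # ... the revision's rollout step would run here ...
        await database.set("version", rev)
        await database.set(f"migration_mode:{rev}", "new")
        await database.delete(f"migration_mode:{previous_rev}")
        previous_rev = rev

db = Database()
asyncio.run(upgrade(db, ["1.0.0", "1.1.0", "1.1.2"], "1.0.0"))
print(db.store)  # {'version': '1.1.2', 'migration_mode:1.1.2': 'new'}
```

After the run, only the latest revision keeps a `migration_mode` key, which is exactly the stale-flag cleanup the new `database.delete(...)` calls provide.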

subvortex/version.py

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-__version__ = "1.1.1"
+__version__ = "1.1.2"

tests/unit_tests/auto_upgrader/test_container_orchestrator.py

Lines changed: 2 additions & 1 deletion

@@ -79,6 +79,7 @@ def orchestrator():
     # Mock internal steps in run_plan
     # orch._get_current_version = mock.MagicMock()
     # orch._get_latest_version = mock.MagicMock()
+    orch._switch_services = mock.MagicMock()
     orch._pull_current_assets = mock.MagicMock()
     orch._pull_latest_assets = mock.MagicMock()
     orch._load_current_services = mock.MagicMock()
@@ -442,7 +443,7 @@ async def test_run_plan_when_no_new_version_should_execute_until_check_versions_
 
 
 @pytest.mark.asyncio
-async def test_run_plan_when_migrate_for_the_first_time_after_releasing_auto_upragder(
+async def test_run_plan_when_migrate_for_the_first_time_after_releasing_auto_upragder2(
     orchestrator,
 ):
     # Arrange

tests/unit_tests/auto_upgrader/test_process_orchestrator.py

Lines changed: 1 addition & 0 deletions

@@ -79,6 +79,7 @@ def orchestrator():
     # Mock internal steps in run_plan
     # orch._get_current_version = mock.MagicMock()
     # orch._get_latest_version = mock.MagicMock()
+    orch._switch_services = mock.MagicMock()
     orch._pull_current_assets = mock.MagicMock()
     orch._pull_latest_assets = mock.MagicMock()
     orch._load_current_services = mock.MagicMock()
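The fixture line added in both test files follows the standard `unittest.mock` pattern. As a generic illustration (this `Orchestrator` is a hypothetical placeholder, not the project's class), assigning a `MagicMock` to a method stubs out its side effects and lets the test assert it was called:

```python
from unittest import mock

class Orchestrator:
    """Placeholder orchestrator with one side-effecting step."""
    def _switch_services(self):
        raise RuntimeError("would touch real services")

    def run_plan(self):
        self._switch_services()

def test_run_plan_switches_services():
    orch = Orchestrator()
    orch._switch_services = mock.MagicMock()  # stub out the real step
    orch.run_plan()  # safe: the mock absorbs the call
    orch._switch_services.assert_called_once()
```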
