
Quickstart rustfs update #2569

Open
leekeiabstraction wants to merge 5 commits into apache:main from leekeiabstraction:quickstart-rustfs-update

Conversation

@leekeiabstraction
Contributor

Purpose

Linked issue: close #2495

Brief change log

  • Add RustFs container in Flink Quickstart
  • Add step to create bucket in RustFs
  • Add step to verify kv snapshot in RustFs
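The three steps in the change log might look roughly like the following sketch. The image name, port, credentials, endpoint, and bucket name here are illustrative assumptions, not taken from the PR:

```shell
# Hypothetical sketch -- names, ports, and credentials are placeholders.

# 1. Start a RustFS container alongside the Fluss quickstart cluster
docker run -d --name rustfs -p 9000:9000 rustfs/rustfs

# 2. Create a bucket for Fluss data with any S3-compatible client
aws --endpoint-url http://localhost:9000 s3 mb s3://fluss

# 3. After running the quickstart, verify kv snapshot files landed in the bucket
aws --endpoint-url http://localhost:9000 s3 ls --recursive s3://fluss/
```

In a quickstart the same setup would more likely live in the docker-compose file as an extra service, with the bucket-creation step run once from the host.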

Tests

Manually ran through and verified steps with 0.8.0-incubating artefacts and

@leekeiabstraction
Contributor Author

@wuchong I've updated the Flink Quickstart page. LMK if we also want to do the same for Lakehouse Quickstart page.

@leekeiabstraction
Contributor Author

I have attempted to update the Lakehouse Quickstart doc as well. However, I ran into the following error, which points to the missing hadoop-aws artefact that Paimon depends on:

Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2273) ~[?:?]
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2367) ~[?:?]
	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2793) ~[?:?]
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2810) ~[?:?]
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100) ~[?:?]
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849) ~[?:?]
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831) ~[?:?]
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389) ~[?:?]
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) ~[?:?]
	at org.apache.paimon.fs.hadoop.HadoopFileIO.createFileSystem(HadoopFileIO.java:206) ~[?:?]
	at org.apache.paimon.fs.hadoop.HadoopFileIO.getFileSystem(HadoopFileIO.java:198) ~[?:?]
	at org.apache.paimon.fs.hadoop.HadoopFileIO.getFileSystem(HadoopFileIO.java:175) ~[?:?]
	at org.apache.paimon.fs.hadoop.HadoopFileIO.exists(HadoopFileIO.java:139) ~[?:?]
	at org.apache.paimon.fs.FileIO.checkAccess(FileIO.java:618) ~[?:?]
	at org.apache.paimon.fs.FileIO.get(FileIO.java:544) ~[?:?]
	at org.apache.paimon.catalog.CatalogFactory.createUnwrappedCatalog(CatalogFactory.java:97) ~[?:?]
	at org.apache.paimon.catalog.CatalogFactory.createCatalog(CatalogFactory.java:71) ~[?:?]
	at org.apache.paimon.catalog.CatalogFactory.createCatalog(CatalogFactory.java:67) ~[?:?]
	at org.apache.fluss.lake.paimon.PaimonLakeCatalog.<init>(PaimonLakeCatalog.java:79) ~[?:?]
	at org.apache.fluss.lake.paimon.PaimonLakeStorage.createLakeCatalog(PaimonLakeStorage.java:47) ~[?:?]
	at org.apache.fluss.lake.paimon.PaimonLakeStorage.createLakeCatalog(PaimonLakeStorage.java:32) ~[?:?]
	at org.apache.fluss.lake.lakestorage.PluginLakeStorageWrapper$ClassLoaderFixingLakeStorage.createLakeCatalog(PluginLakeStorageWrapper.java:130) ~[fluss-server-0.9-SNAPSHOT.jar:0.9-SNAPSHOT]
	at org.apache.fluss.server.coordinator.LakeCatalogDynamicLoader.createLakeCatalog(LakeCatalogDynamicLoader.java:119) ~[fluss-server-0.9-SNAPSHOT.jar:0.9-SNAPSHOT]
	at org.apache.fluss.server.coordinator.LakeCatalogDynamicLoader$LakeCatalogContainer.<init>(LakeCatalogDynamicLoader.java:135) ~[fluss-server-0.9-SNAPSHOT.jar:0.9-SNAPSHOT]
	at org.apache.fluss.server.coordinator.LakeCatalogDynamicLoader.<init>(LakeCatalogDynamicLoader.java:55) ~[fluss-server-0.9-SNAPSHOT.jar:0.9-SNAPSHOT]
	at org.apache.fluss.server.coordinator.CoordinatorServer.startServices(CoordinatorServer.java:175) ~[fluss-server-0.9-SNAPSHOT.jar:0.9-SNAPSHOT]
	at org.apache.fluss.server.ServerBase.start(ServerBase.java:131) ~[fluss-server-0.9-SNAPSHOT.jar:0.9-SNAPSHOT]

I might need some clarification here: should I update the Lakehouse Quickstart instructions so that these dependencies are added, or should I update the Fluss image's Dockerfile to include them?

@wuchong
Member

wuchong commented Feb 5, 2026

@leekeiabstraction yes, I think we need to include hadoop-aws when using RustFS, because previously the quickstart used the local filesystem, which doesn't require this jar.
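Bundling the jar could be sketched like this in the image's Dockerfile or setup script. The version numbers and target directory below are assumptions for illustration; hadoop-aws must be paired with the aws-java-sdk-bundle version it was built against, since that bundle provides the S3 client classes `S3AFileSystem` needs:

```shell
# Hypothetical sketch -- versions and the lib path are placeholders.
HADOOP_AWS_VERSION=3.3.6
AWS_SDK_VERSION=1.12.367
MAVEN=https://repo1.maven.org/maven2

curl -fLo /opt/fluss/lib/hadoop-aws-${HADOOP_AWS_VERSION}.jar \
  ${MAVEN}/org/apache/hadoop/hadoop-aws/${HADOOP_AWS_VERSION}/hadoop-aws-${HADOOP_AWS_VERSION}.jar
curl -fLo /opt/fluss/lib/aws-java-sdk-bundle-${AWS_SDK_VERSION}.jar \
  ${MAVEN}/com/amazonaws/aws-java-sdk-bundle/${AWS_SDK_VERSION}/aws-java-sdk-bundle-${AWS_SDK_VERSION}.jar
```

Whether the jars belong on the server classpath or in the Paimon plugin directory depends on how Fluss isolates the lake storage classloader, which the stack trace above (`PluginLakeStorageWrapper$ClassLoaderFixingLakeStorage`) suggests is plugin-scoped.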

@wuchong
Member

wuchong commented Feb 5, 2026

@luoyuxia could you also help to have another review?

@luoyuxia
Contributor

luoyuxia commented Feb 5, 2026

@luoyuxia could you also help to have another review?

Sure, I'll have another review today.

@luoyuxia
Contributor

@luoyuxia left a comment
@leekeiabstraction Thanks for the pull request. I left a minor comment, PTAL.

@leekeiabstraction
Contributor Author

@wuchong @luoyuxia Comments are addressed, I've also retested the new guide and it worked. PTAL.

I'll leave the Lakehouse Quickstart as-is for now, since @luoyuxia is working on updating the Paimon example in a separate PR: #2576


Development

Successfully merging this pull request may close these issues.

Update Quickstart Demo to Use S3 (via RustFS) Instead of Local File System
