feat(crates): add soar-operations for frontend-agnostic operations (#157)
Conversation

Caution: Review failed. The pull request is closed.

📝 Walkthrough: Adds a new workspace crate, `soar-operations`.
Sequence Diagram(s)
sequenceDiagram
participant CLI as Client
participant Ctx as SoarContext
participant MM as MetadataManager
participant DB as DieselDatabase
participant DL as Downloader
participant Installer as Installer
participant Events as EventSink
CLI->>Ctx: install request (pkg query)
Ctx->>MM: resolve metadata -> InstallTarget[]
MM-->>Ctx: InstallTarget[]
Ctx->>DB: acquire lock / check installed
DB-->>Ctx: lock result / installed state
Ctx->>DL: download artifact (progress callback)
DL-->>Events: DownloadProgress (via progress bridge)
DL-->>Ctx: artifact path
Ctx->>Installer: verify, extract, symlink, record metadata
Installer->>DB: write installation record
Installer-->>Events: InstallComplete
Ctx-->>CLI: InstallReport
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 4 passed
06395fe to 0af1555 (compare)
Actionable comments posted: 10
🤖 Fix all issues with AI agents
In `@crates/soar-operations/Cargo.toml`:
- Around line 16-34: The Cargo.toml for the soar-operations crate includes seven
unused workspace dependencies; remove the entries for nix, regex, semver, serde,
serde_json, toml, and ureq from the [dependencies] list so they are no longer
declared directly (they remain available transitively via soar-core,
soar-config, soar-db, soar-dl, soar-package, soar-registry, and soar-utils);
ensure no code in the crate imports these crates before removing them and run
cargo check to verify the manifest is clean.
In `@crates/soar-operations/src/apply.rs`:
- Around line 222-261: version_updates is currently built from all
diff.to_install entries then applied if any install succeeded, causing failed
installs to still update packages.toml; change the logic so you first call
perform_installation(ctx, targets, &options).await? and then filter
version_updates to only the package names present in report.installed before
calling PackagesConfig::update_package; match by pkg name (use the pkg_name
strings collected in version_updates and the entries in report.installed) so
only successfully installed packages are updated. Apply the same filtering
approach to update_version_updates in the update section (use the
successful-update report to restrict which versions are written).
- Around line 41-193: The build breaks because Option::is_none_or is used (e.g.,
in compute_diff when checking pkg.version and in the pruning declared-key
checks) but that method requires Rust 1.82; either declare the MSRV by adding
rust-version = "1.82" to the root Cargo.toml, or replace all uses of is_none_or
with a compatible alternative (e.g., match/unwrap_or or map_or) in the affected
functions (compute_diff in crates/soar-operations and the analogous apply
handling in crates/soar-cli) so the code compiles on older toolchains.
In `@crates/soar-operations/src/context.rs`:
- Around line 145-165: The SyncStage events are emitted in the wrong order:
SyncStage::Decompressing is correctly emitted before handling MetadataContent,
but SyncStage::WritingDatabase is emitted after the metadata DB has already been
written and before validate_packages, which misleads UIs; move the
self.inner.events.emit(SoarEvent::SyncProgress { stage:
SyncStage::WritingDatabase, ... }) so it occurs immediately before the metadata
write logic (i.e., before matching on MetadataContent and calling
write_metadata_db or handle_json_metadata), or alternatively introduce and emit
a new stage (e.g., SyncStage::Validating or SyncStage::PostWriteValidation)
around the call to validate_packages(repo, &etag). Ensure you update the emit
location near the existing self.inner.events.emit call and keep the repo_name
cloning consistent.
In `@crates/soar-operations/src/health.rs`:
- Around line 47-82: The RemovedInfo entries are being created with empty
repo_name and version because package is moved into installed_pkg via
package.into(), so repo_name/version must be extracted beforehand; capture let
repo_name = package.repo_name.clone() and let version = package.version.clone()
(or similar) before the line that does let installed_pkg = package.into(), then
use those variables when pushing RemovedInfo (instead of String::new()) so
RemovedInfo preserves the original repo_name and version.
In `@crates/soar-operations/src/install.rs`:
- Around line 769-772: The code calls config.get_packages_path(...) and uses
.unwrap() to build install_dir which can panic; change this to propagate the
error instead: replace the .unwrap() usage in the install_dir construction so
get_packages_path returns a Result and use the ? operator (or map_err if you
need to convert the error) to propagate failures from get_packages_path up the
current function (the function that constructs install_dir in install.rs),
ensuring the function's return type supports the error. Update any error
conversion as needed so install_dir =
config.get_packages_path(target.profile.clone())? .join(format!("{}-{}-{}",
pkg.pkg_name, pkg.pkg_id, dir_suffix)); is used conceptually.
In `@crates/soar-operations/src/list.rs`:
- Around line 104-144: The function list_installed is declared async but
contains no await points; remove the async to avoid unnecessary Future overhead
by changing "pub async fn list_installed(... ) ->
SoarResult<InstalledListResult>" to a synchronous "pub fn list_installed(... )
-> SoarResult<InstalledListResult>" (update any call sites that awaited it), or
if you intentionally want it async for API uniformity, add a clarifying comment
above list_installed explaining the rationale; ensure references to symbols like
list_installed, diesel_db.with_conn, CoreRepository::list_filtered and dir_size
remain correct after the change.
In `@crates/soar-operations/src/run.rs`:
- Around line 41-44: The code returns a cached binary when output_path
(constructed from cache_bin and package_name) exists without validating
integrity; update the logic in the function that returns
PrepareRunResult::Ready(output_path) to compute and verify a checksum (e.g.,
SHA256) of output_path against the trusted checksum stored in the package
metadata/cache metadata for package_name, and only return
PrepareRunResult::Ready when they match; if the checksum is missing or
mismatches, delete or ignore the cached file and trigger the fresh
prepare/download path (or return an error), and add a short comment explaining
the trade-off if you choose to skip verification under specific conditions.
- Around line 156-206: The function download_to_cache blocks the async runtime,
keeps an unused ctx, and can break checksum verification after extracting
archives; fix by (1) removing the unused ctx parameter from download_to_cache
and its callers, (2) running the blocking downloads OciDownload::execute() and
Download::execute() inside tokio::task::spawn_blocking(|| { ... }) and await
the JoinHandle so the tokio thread isn't blocked, and (3) preserve a single
canonical file at output_path for checksum: after extraction do not delete the
archive immediately—either rename/move the extracted file that corresponds to
package.name (or if the archive produced a single file, move that entry to
output_path) so output_path exists for calculate_checksum(&output_path), and
only then remove any temporary files/directories; reference symbols:
download_to_cache, OciDownload::execute, Download::execute, output_path,
extract_dir, package.download_url, package.name.
In `@crates/soar-operations/src/switch.rs`:
- Around line 88-176: The current flow calls
CoreRepository::unlink_others_by_checksum(...) in one transaction, does
filesystem work (mangle_package_symlinks, integrate_package, setup_portable_dir)
that can fail, then calls CoreRepository::link_by_checksum(...) in a separate
transaction — if a crash occurs between them all variants remain unlinked. Fix
by making the DB mutations atomic: either perform link_by_checksum and
unlink_others_by_checksum in the same transaction that also guards the success
of the FS steps, or at minimum change the order to call
CoreRepository::link_by_checksum(...) first (so the selected variant is linked)
and then CoreRepository::unlink_others_by_checksum(...), and if you keep
separate transactions add compensating rollback logic to relink on FS failure;
locate and change the calls to CoreRepository::unlink_others_by_checksum,
CoreRepository::link_by_checksum and the surrounding uses of
mangle_package_symlinks, integrate_package, and setup_portable_dir to implement
the chosen approach.
🧹 Nitpick comments (12)
crates/soar-operations/src/context.rs (1)
89-105: `sync()` and `metadata_manager()` run `init_repo_dbs` independently.
`sync()` calls `init_repo_dbs(true)`, but it does not populate the `metadata_manager` `AsyncOnceCell`. A subsequent call to `metadata_manager()` will invoke `init_repo_dbs(false)` again. Thanks to ETags this won't re-download, but it still performs per-repo network round-trips and re-opens DB connections unnecessarily. Consider having `sync()` also seed the `metadata_manager` cell, or refactoring so that `metadata_manager()` skips `init_repo_dbs` when repos are already synced.
crates/soar-operations/src/list.rs (1)
123-137: `dir_size` is called sequentially for each installed package — consider parallelizing.
`dir_size` performs recursive filesystem I/O per package, which can be slow with many installed packages. The `list_packages` function already uses `into_par_iter()` for a lighter workload. This loop would benefit more from parallel iteration. Sketch using rayon:
- let mut total_size = 0u64;
- let total_count = packages.len();
-
- let entries: Vec<InstalledEntry> = packages
-     .into_iter()
-     .map(|package| {
-         let installed_path = PathBuf::from(&package.installed_path);
-         let disk_size = dir_size(&installed_path).unwrap_or(0);
-         let is_healthy = package.is_installed && installed_path.exists();
-         total_size += disk_size;
-
-         InstalledEntry {
-             package,
-             disk_size,
-             is_healthy,
-         }
-     })
-     .collect();
+ let total_count = packages.len();
+
+ let entries: Vec<InstalledEntry> = packages
+     .into_par_iter()
+     .map(|package| {
+         let installed_path = PathBuf::from(&package.installed_path);
+         let disk_size = dir_size(&installed_path).unwrap_or(0);
+         let is_healthy = package.is_installed && installed_path.exists();
+         InstalledEntry {
+             package,
+             disk_size,
+             is_healthy,
+         }
+     })
+     .collect();
+
+ let total_size: u64 = entries.iter().map(|e| e.disk_size).sum();
crates/soar-operations/src/update.rs (2)
428-435: Avoid cloning the full `Vec<InstallTarget>` for the cleanup loop.
`targets.clone()` on line 435 deep-copies every `InstallTarget` (including `Package` and `existing_install`) just so the vector can be reused in the cleanup loop (lines 440-469). Extract the lightweight data needed for cleanup before consuming `targets`. Proposed fix:
+ // Collect cleanup info before consuming targets
+ let cleanup_info: Vec<_> = targets
+     .iter()
+     .map(|t| {
+         (
+             t.package.pkg_name.clone(),
+             t.package.pkg_id.clone(),
+             t.existing_install.as_ref().map(|e| e.version.clone()),
+             t.package.clone(), // needed for remove_old_versions
+         )
+     })
+     .collect();
+
  let options = InstallOptions {
      no_verify: false,
      ..Default::default()
  };
- let install_report = perform_installation(ctx, targets.clone(), &options).await?;
+ let install_report = perform_installation(ctx, targets, &options).await?;

  // Clean up old versions
  if !keep_old {
      let diesel_db = ctx.diesel_core_db()?.clone();
-     for target in &targets {
+     for (pkg_name, pkg_id, old_version, package) in &cleanup_info {
          let op_id = next_op_id();
          ctx.events().emit(SoarEvent::UpdateCleanup {
              op_id,
-             pkg_name: target.package.pkg_name.clone(),
-             pkg_id: target.package.pkg_id.clone(),
-             old_version: target
-                 .existing_install
-                 .as_ref()
-                 .map(|e| e.version.clone())
-                 .unwrap_or_default(),
+             pkg_name: pkg_name.clone(),
+             pkg_id: pkg_id.clone(),
+             old_version: old_version.clone().unwrap_or_default(),
              stage: UpdateCleanupStage::Removing,
          });
-         let _ = remove_old_versions(&target.package, &diesel_db, false);
+         let _ = remove_old_versions(package, &diesel_db, false);
229-278: `_update_toml_url` is dead code – computed but never propagated.
The `toml_url`/`_update_toml_url` value (lines 273-278) is calculated in both branches of `check_local_update` but bound to an underscore-prefixed name and never included in `UpdateInfo`. Meanwhile, `perform_update` (line 422) hardcodes `None` for the URL tracking. If URL updates in `packages.toml` aren't needed (because URLs use `{version}` placeholders), this computation can be removed to reduce confusion.
crates/soar-operations/src/apply.rs (1)
596-643: `create_install_target` and `create_url_install_target` are near-identical.
Both functions differ only in how `package` is sourced. Consider unifying them into a single helper that accepts a `Package` directly, since `url_pkg.to_package()` can be called at the call site. Sketch:
-fn create_url_install_target(
-    url_pkg: &UrlPackage,
-    resolved: &ResolvedPackage,
-    existing: Option<InstalledPackage>,
-) -> InstallTarget {
-    InstallTarget {
-        package: url_pkg.to_package(),
-        ...
-    }
-}

 // At call sites, replace:
-    create_url_install_target(&url_pkg, pkg, installed)
+    create_install_target(pkg, url_pkg.to_package(), installed)
718-732: Lock acquisition loop has no upper bound – could spin indefinitely.
If a lock is held by a crashed process or stuck operation, this loop will block forever (sleeping 500ms per iteration with no max retries). Consider adding a maximum retry count or overall timeout.
Proposed fix
  let mut lock_attempts = 0;
+ const MAX_LOCK_ATTEMPTS: u32 = 120; // 60 seconds at 500ms intervals
  let _package_lock = loop {
      match FileLock::try_acquire(&pkg.pkg_name) {
          Ok(Some(lock)) => break Ok(lock),
          Ok(None) => {
              lock_attempts += 1;
+             if lock_attempts >= MAX_LOCK_ATTEMPTS {
+                 break Err(SoarError::Custom(format!(
+                     "Timed out waiting for lock on '{}'",
+                     pkg.pkg_name
+                 )));
+             }
              if lock_attempts == 1 {
                  debug!("waiting for lock on '{}'", pkg.pkg_name);
              }
              tokio::time::sleep(Duration::from_millis(500)).await;
          }
          Err(err) => break Err(err),
      }
  }
45-53: `mangle_package_symlinks` is declared `async` but contains no `.await` points and should be a regular `fn`.
The function body is entirely synchronous. Declaring it `async` wraps it in a `Future` unnecessarily, adding overhead at every call site. This needs to be updated in four locations:

- `crates/soar-operations/src/install.rs` (line 943)
- `crates/soar-operations/src/switch.rs` (line 98)
- `crates/soar-cli/src/install.rs` (line 1408)
- `crates/soar-cli/src/use.rs` (line 88)

Note: There's also a duplicate implementation of this function in `crates/soar-cli/src/utils.rs` (line 237) with the same issue.
Proposed change for crates/soar-operations/src/utils.rs:
-pub async fn mangle_package_symlinks(
+pub fn mangle_package_symlinks(
     install_dir: &Path,
     bin_dir: &Path,
     provides: Option<&[PackageProvide]>,

Then remove `.await` from all four call sites listed above.
79-87: Consider extracting a struct for the `Ambiguous` variant (mirrors `AmbiguousPackage`).
`ResolveResult::Ambiguous` wraps `AmbiguousPackage { query, candidates: Vec<Package> }`, but `RemoveResolveResult::Ambiguous` uses inline fields with `Vec<InstalledPackage>`. The different inner types justify separate structs, but introducing a parallel `AmbiguousInstalledPackage` (or making `AmbiguousPackage` generic) would keep the pattern consistent across resolve result types.
127-157: `SearchEntry` and `PackageListEntry` are structurally identical — consider unifying.
Both structs contain `{ package: Package, installed: bool }`. If they're not expected to diverge, a single type (or a type alias) would reduce duplication. If they will diverge, a brief doc comment explaining the intent would help.
Also, the count field naming is inconsistent across the three result types: `total_count` in `SearchResult` (line 129) and `InstalledListResult` (line 149), but `total` in `PackageListResult` (line 139). Aligning on one name improves API predictability.
168-172: `installed_path` uses `String` while other path fields use `PathBuf`.
Every other path-like field in this file (`install_dir`, `bin_path`, `broken_symlinks`, etc.) is `PathBuf`. If `installed_path` originates from a DB text column, an `impl From` or conversion at the boundary would keep the public API consistent.
176-201: Nit: `soar_config::packages::ResolvedPackage` is used fully-qualified inline.
Consider adding it to the imports at the top of the file for consistency with how `soar_core` types are imported. This appears twice (lines 180 and 182). Suggested diff:

 use soar_core::{
     database::models::{InstalledPackage, Package},
     package::install::InstallTarget,
 };
+use soar_config::packages::ResolvedPackage;

Then replace `soar_config::packages::ResolvedPackage` with `ResolvedPackage` on lines 180 and 182.
50-75: Most public types lack `Debug` (and potentially `Clone`) derives.
Only `InstallOptions`, `InstalledInfo`, and `FailedInfo` derive `Debug`. Since this is a new public API surface, adding `#[derive(Debug)]` broadly would make consumer debugging and logging significantly easier. For simple data-only structs like `RemovedInfo`, `BrokenPackage`, `RunResult`, `ApplyReport`, etc., `Clone` is also essentially free and often useful.
Also applies to: 89-99, 103-123, 127-157, 161-172, 203-226
pub async fn compute_diff(
    ctx: &SoarContext,
    resolved: &[ResolvedPackage],
    prune: bool,
) -> SoarResult<ApplyDiff> {
    debug!(
        count = resolved.len(),
        prune = prune,
        "computing apply diff"
    );
    let metadata_mgr = ctx.metadata_manager().await?;
    let diesel_db = ctx.diesel_core_db()?.clone();

    let mut diff = ApplyDiff::default();
    let mut declared_keys: HashSet<(String, Option<String>, Option<String>)> = HashSet::new();

    for pkg in resolved {
        declared_keys.insert((pkg.name.clone(), pkg.pkg_id.clone(), pkg.repo.clone()));

        let is_github_or_gitlab = pkg.github.is_some() || pkg.gitlab.is_some();
        if is_github_or_gitlab || pkg.url.is_some() {
            handle_local_package(pkg, is_github_or_gitlab, &diesel_db, &mut diff)?;
            continue;
        }

        // Find package in metadata
        let found_packages: Vec<Package> = if let Some(ref repo_name) = pkg.repo {
            metadata_mgr
                .query_repo(repo_name, |conn| {
                    MetadataRepository::find_filtered(
                        conn,
                        Some(&pkg.name),
                        pkg.pkg_id.as_deref(),
                        pkg.version.as_deref(),
                        None,
                        Some(SortDirection::Asc),
                    )
                })?
                .unwrap_or_default()
                .into_iter()
                .map(|p| {
                    let mut package: Package = p.into();
                    package.repo_name = repo_name.clone();
                    package
                })
                .collect()
        } else {
            metadata_mgr.query_all_flat(|repo_name, conn| {
                let pkgs = MetadataRepository::find_filtered(
                    conn,
                    Some(&pkg.name),
                    pkg.pkg_id.as_deref(),
                    pkg.version.as_deref(),
                    None,
                    Some(SortDirection::Asc),
                )?;
                Ok(pkgs
                    .into_iter()
                    .map(|p| {
                        let mut package: Package = p.into();
                        package.repo_name = repo_name.to_string();
                        package
                    })
                    .collect())
            })?
        };

        if found_packages.is_empty() {
            diff.not_found.push(pkg.name.clone());
            continue;
        }

        let metadata_pkg = found_packages.into_iter().next().unwrap();

        let installed_packages: Vec<InstalledPackage> = diesel_db
            .with_conn(|conn| {
                CoreRepository::list_filtered(
                    conn,
                    Some(&metadata_pkg.repo_name),
                    Some(&metadata_pkg.pkg_name),
                    Some(&metadata_pkg.pkg_id),
                    None,
                    None,
                    None,
                    None,
                    Some(SortDirection::Asc),
                )
            })?
            .into_iter()
            .map(Into::into)
            .collect();

        let existing_install = installed_packages.into_iter().find(|ip| ip.is_installed);

        if let Some(ref existing) = existing_install {
            let version_matches = pkg.version.as_ref().is_none_or(|v| existing.version == *v);

            if version_matches && existing.version == metadata_pkg.version {
                diff.in_sync.push(format!(
                    "{}#{}@{}",
                    existing.pkg_name, existing.pkg_id, existing.version
                ));
            } else if !existing.pinned || pkg.version.is_some() {
                let target = create_install_target(pkg, metadata_pkg, Some(existing.clone()));
                diff.to_update.push((pkg.clone(), target));
            } else {
                diff.in_sync.push(format!(
                    "{}#{}@{} (pinned)",
                    existing.pkg_name, existing.pkg_id, existing.version
                ));
            }
        } else {
            let target = create_install_target(pkg, metadata_pkg, None);
            diff.to_install.push((pkg.clone(), target));
        }
    }

    if prune {
        let all_installed: Vec<InstalledPackage> = diesel_db
            .with_conn(|conn| {
                CoreRepository::list_filtered(
                    conn,
                    None,
                    None,
                    None,
                    None,
                    None,
                    None,
                    None,
                    Some(SortDirection::Asc),
                )
            })?
            .into_iter()
            .filter(|p| p.is_installed)
            .map(Into::into)
            .collect();

        for installed in all_installed {
            let is_declared = declared_keys.iter().any(|(name, pkg_id, repo)| {
                let name_matches = *name == installed.pkg_name;
                let pkg_id_matches = pkg_id.as_ref().is_none_or(|id| *id == installed.pkg_id);
                let repo_matches = repo.as_ref().is_none_or(|r| *r == installed.repo_name);
                name_matches && pkg_id_matches && repo_matches
            });

            if !is_declared {
                diff.to_remove.push(installed);
            }
        }
    }

    Ok(diff)
}
Add rust-version = "1.82" to Cargo.toml or replace is_none_or with a compatible alternative.
Option::is_none_or (used at lines 136, 181–182 in crates/soar-operations/src/apply.rs and at lines 463, 508–509 in crates/soar-cli/src/apply.rs) was stabilized in Rust 1.82.0. The project currently has no explicit MSRV declared in Cargo.toml. Add rust-version = "1.82" to the root Cargo.toml to reflect this requirement, or refactor to use an alternative approach compatible with earlier Rust versions.
🤖 Prompt for AI Agents
In `@crates/soar-operations/src/apply.rs` around lines 41 - 193, The build breaks
because Option::is_none_or is used (e.g., in compute_diff when checking
pkg.version and in the pruning declared-key checks) but that method requires
Rust 1.82; either declare the MSRV by adding rust-version = "1.82" to the root
Cargo.toml, or replace all uses of is_none_or with a compatible alternative
(e.g., match/unwrap_or or map_or) in the affected functions (compute_diff in
crates/soar-operations and the analogous apply handling in crates/soar-cli) so
the code compiles on older toolchains.
let output_path = cache_bin.join(package_name);
if output_path.exists() {
    return Ok(PrepareRunResult::Ready(output_path));
}
Cached binary is returned without checksum verification.
If output_path already exists, it is returned immediately (line 43) without any integrity check. A corrupted or tampered file in the cache will be executed as-is. Consider verifying the checksum against metadata when a cached copy is found, or at least noting the trade-off in a comment.
🤖 Prompt for AI Agents
In `@crates/soar-operations/src/run.rs` around lines 41 - 44, The code returns a
cached binary when output_path (constructed from cache_bin and package_name)
exists without validating integrity; update the logic in the function that
returns PrepareRunResult::Ready(output_path) to compute and verify a checksum
(e.g., SHA256) of output_path against the trusted checksum stored in the package
metadata/cache metadata for package_name, and only return
PrepareRunResult::Ready when they match; if the checksum is missing or
mismatches, delete or ignore the cached file and trigger the fresh
prepare/download path (or return an error), and add a short comment explaining
the trade-off if you choose to skip verification under specific conditions.
fn download_to_cache(
    ctx: &SoarContext,
    package: &Package,
    output_path: &Path,
    cache_bin: &Path,
    progress_callback: Arc<dyn Fn(soar_dl::types::Progress) + Send + Sync>,
) -> SoarResult<()> {
    let _ = ctx;
    if let Some(ref url) = package.ghcr_blob {
        let cb = progress_callback.clone();
        let mut dl = OciDownload::new(url.as_str())
            .output(output_path.to_string_lossy())
            .overwrite(OverwriteMode::Force);
        dl = dl.progress(move |p| {
            cb(p);
        });
        dl.execute()?;
    } else {
        let extract_dir = get_extract_dir(cache_bin);
        let cb = progress_callback.clone();
        let mut dl = Download::new(&package.download_url)
            .output(output_path.to_string_lossy())
            .overwrite(OverwriteMode::Force)
            .extract(true)
            .extract_to(&extract_dir);
        dl = dl.progress(move |p| {
            cb(p);
        });

        let file_name = dl.execute()?;
        if extract_dir.exists() {
            fs::remove_file(file_name).ok();

            for entry in fs::read_dir(&extract_dir)
                .with_context(|| format!("reading {} directory", extract_dir.display()))?
            {
                let entry = entry.with_context(|| {
                    format!("reading entry from directory {}", extract_dir.display())
                })?;
                let from = entry.path();
                let to = cache_bin.join(entry.file_name());
                fs::rename(&from, &to)
                    .with_context(|| format!("renaming {} to {}", from.display(), to.display()))?;
            }

            fs::remove_dir_all(&extract_dir).ok();
        }
    }

    Ok(())
}
`download_to_cache` performs blocking I/O on an async thread, carries an unused `ctx` parameter, and may fail checksum verification.

- `OciDownload::execute()` and `Download::execute()` are synchronous/blocking calls invoked from `async fn prepare_run()` (line 25). This blocks the tokio runtime thread and should be wrapped in `tokio::task::spawn_blocking()`.
- Line 163: `let _ = ctx;` — remove the unused `ctx` parameter.
- Extraction breaks checksum verification. In the `Download` branch (lines 176–202), the archive file is removed at line 187 after extraction, and extracted entries are moved to `cache_bin` with their original filenames. If the archive doesn't produce a file named exactly `package_name`, `output_path` (= `cache_bin.join(package_name)`) won't exist when `calculate_checksum(&output_path)` runs at line 125, causing a runtime error.
🤖 Prompt for AI Agents
In `@crates/soar-operations/src/run.rs` around lines 156 - 206, The function
download_to_cache blocks the async runtime, keeps an unused ctx, and can break
checksum verification after extracting archives; fix by (1) removing the unused
ctx parameter from download_to_cache and its callers, (2) running the blocking
downloads OciDownload::execute() and Download::execute() inside
tokio::task::spawn_blocking(|| { ... }) and await the JoinHandle so the tokio
thread isn't blocked, and (3) preserve a single canonical file at output_path
for checksum: after extraction do not delete the archive immediately—either
rename/move the extracted file that corresponds to package.name (or if the
archive produced a single file, move that entry to output_path) so output_path
exists for calculate_checksum(&output_path), and only then remove any temporary
files/directories; reference symbols: download_to_cache, OciDownload::execute,
Download::execute, output_path, extract_dir, package.download_url, package.name.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@crates/soar-operations/src/context.rs`:
- Around line 182-187: init_repo_dbs currently swallows fetch_metadata failures
by emitting SoarEvent::Log (SoarEvent::Log, LogLevel::Error) and always
returning Ok(()); change it to collect per-repo errors from fetch_metadata (use
the repo identifier like repo.name) and propagate a non-Ok result when
appropriate: either return an Err (or a Result type carrying a Vec of
(repo.name, error)) when all repos fail, or change the function signature to
return a partial-failure indicator (e.g., Result<(), InitRepoDbsError> or
Result<PartialSuccess, InitRepoDbsError>) and include the collected per-repo
errors so callers of init_repo_dbs / sync / metadata_manager can detect and
handle failed repo syncs instead of assuming success.
In `@crates/soar-operations/src/install.rs`:
- Around line 718-732: The current unbounded loop around FileLock::try_acquire
for pkg.pkg_name (using lock_attempts) can hang indefinitely on stale locks; add
a maximum retry count or overall timeout (e.g., max_attempts or deadline using
Instant::now()) and break with a clear SoarError::Custom if exceeded; update the
loop that assigns _package_lock to check the limit before sleeping (and log a
warning when first waiting) so the function fails fast with a descriptive error
instead of blocking forever.
In `@crates/soar-operations/src/update.rs`:
- Around line 438-469: The cleanup loop currently runs remove_old_versions for
every target regardless of installation success; update it to consult the
install_report produced by perform_installation and skip cleanup for targets
that failed. Specifically, in the block that iterates over targets (referencing
targets, remove_old_versions, ctx.events, and SoarEvent::UpdateCleanup), match
each target to its corresponding entry in install_report (by index or pkg_id)
and only emit the Removing and Complete UpdateCleanup events and call
remove_old_versions when the install_report entry indicates success; for failed
entries, do not remove old versions or emit those cleanup stages. Ensure op_id
is still generated per-target as before and preserve the keep_old conditional.
In `@crates/soar-operations/src/utils.rs`:
- Around line 196-201: The current code in set_executable reads the file mode
into mode and then does perms.set_mode(mode | 0o755), which can widen read
permissions for group/others; change it to only add execute bits by OR-ing with
0o111 (i.e., use perms.set_mode(mode | 0o111)) so existing read/write bits are
preserved, and continue to call fs::set_permissions(path, perms). Ensure you
update the mask in the block that checks if mode & 0o111 == 0 and keep the
existing with_context(...) error message.
🧹 Nitpick comments (10)
crates/soar-operations/src/context.rs (1)
89-99: `metadata_manager` initialization has a subtle race with concurrent `sync` calls.
If `sync()` and `metadata_manager()` are called concurrently, `metadata_manager` calls `init_repo_dbs(false)` inside the `AsyncOnceCell`, while `sync()` calls `init_repo_dbs(true)`. The `AsyncOnceCell` will only run the initializer once, so if `metadata_manager()` wins the race, the `sync()` call's `init_repo_dbs(true)` still runs but the metadata manager was already created from the non-force path's results. Conversely, if `sync()` runs first, `metadata_manager()` will use the cell's cached value without re-scanning repos. This may be intentional, but worth a comment clarifying the expected call order (sync before metadata_manager).
crates/soar-operations/src/types.rs (1)
127-145: `SearchEntry` and `PackageListEntry` are structurally identical.
Both hold `package: Package` and `installed: bool`. Similarly, `SearchResult` and `PackageListResult` differ only in the name of the count field (`total_count` vs `total`). Consider unifying them or adding a type alias to reduce duplication, unless you anticipate divergence.
crates/soar-operations/src/list.rs (1)
123-137: Side-effecting `+= disk_size` inside `.map()` is a code smell.
Mutating `total_size` inside the `.map()` closure works because the iterator is sequential, but it's fragile — switching to `par_iter` later would silently introduce a data race. Consider using `.fold()` or computing the sum separately.
Suggested approach
```diff
-    let mut total_size = 0u64;
     let total_count = packages.len();
     let entries: Vec<InstalledEntry> = packages
         .into_iter()
         .map(|package| {
             let installed_path = PathBuf::from(&package.installed_path);
             let disk_size = dir_size(&installed_path).unwrap_or(0);
             let is_healthy = package.is_installed && installed_path.exists();
-            total_size += disk_size;
             InstalledEntry {
                 package,
                 disk_size,
                 is_healthy,
             }
         })
         .collect();
+    let total_size: u64 = entries.iter().map(|e| e.disk_size).sum();

     Ok(InstalledListResult {
         packages: entries,
         total_count,
         total_size,
     })
```
45-45: `mangle_package_symlinks` is `async` but contains no `.await` points.
Remove the `async` qualifier to avoid unnecessary `Future` wrapping, or add a comment if this is intentional for interface consistency.
129-131: Unnecessary `.clone()` on `provide.strategy` in `matches!`.
`matches!` only needs a reference for pattern matching. Use `.as_ref()` to avoid cloning.

```diff
- (Some(_), Some(ProvideStrategy::KeepBoth)) | (None, _)
+ (Some(_), Some(&ProvideStrategy::KeepBoth)) | (None, _)
```

or restructure the match to borrow:
```diff
-    let needs_original_symlink = matches!(
-        (provide.target.as_ref(), provide.strategy.clone()),
-        (Some(_), Some(ProvideStrategy::KeepBoth)) | (None, _)
-    );
+    let needs_original_symlink = matches!(
+        (provide.target.as_ref(), provide.strategy.as_ref()),
+        (Some(_), Some(ProvideStrategy::KeepBoth)) | (None, _)
+    );
```
170-175: `RunningHook("pre_remove")` is emitted unconditionally, even when no hooks are configured.
The UI may display "Running pre_remove hook…" for packages that have no hooks. Consider emitting this event only when `get_package_hooks` returns `Some` hooks, or using a more generic stage like `RemoveStage::Starting`.
crates/soar-operations/src/update.rs (2)
229-229: `_update_toml_url` is computed in `check_local_update` but never consumed; `url_tracking` always sets URL to `None`.
In `check_local_update`, the tuple element `_update_toml_url` (line 229) is calculated with non-trivial logic (lines 273–278, 313–318) but is discarded via the `_` prefix. Meanwhile, in `perform_update` (lines 411–426), `url_tracking` hard-codes `None` for the URL with the comment "URL tracking is handled in check_local_update" — but `check_local_update` doesn't actually persist it anywhere either.
This means `UrlUpdateInfo::new_url` will always be `None`, and the TOML URL logic is dead code. If URL tracking in packages.toml is intended, it should be threaded through `UpdateInfo`. If not, the computation in `check_local_update` (lines 273–278, 313–318) should be removed to avoid confusion.
Also applies to: 273-278, 411-426
435-435: `targets.clone()` may be expensive for large update batches.
`perform_installation` consumes the `Vec<InstallTarget>`, and the targets are needed again for the cleanup loop. Consider restructuring to avoid the clone — e.g., extract the cleanup-relevant data (package name/id/version/existing_install) into a lightweight struct before passing ownership to `perform_installation`.
crates/soar-operations/src/install.rs (1)
415-492: `resolve_normal`: existing install lookup may miss the correct record when force-reinstalling.
On lines 470–473, `existing_install` is resolved by matching the new package version against installed records. If the user is force-reinstalling and the new version differs from all installed records, `existing_install` will be `None`, losing settings like portable paths and profile from the current installation. Consider falling back to the first installed record (already available as `maybe_existing` from line 440) when no version match is found.
Proposed fix
```diff
 let existing_install = installed_packages
     .iter()
     .find(|ip| ip.version == pkg.version)
-    .cloned();
+    .cloned()
+    .or(maybe_existing.clone());
```
596-644: `create_install_target` and `create_url_install_target` are near-duplicates.
These two functions differ only in how `package` is obtained (passed directly vs. `url_pkg.to_package()`). Consider unifying into a single helper that takes a `Package`:
Proposed refactor
```diff
-fn create_url_install_target(
-    url_pkg: &UrlPackage,
-    resolved: &ResolvedPackage,
-    existing: Option<InstalledPackage>,
-) -> InstallTarget {
-    InstallTarget {
-        package: url_pkg.to_package(),
-        existing_install: existing,
-        // ... identical fields ...
-    }
-}
+fn create_url_install_target(
+    url_pkg: &UrlPackage,
+    resolved: &ResolvedPackage,
+    existing: Option<InstalledPackage>,
+) -> InstallTarget {
+    create_install_target(resolved, url_pkg.to_package(), existing)
+}
```
```rust
Err(err) => {
    self.inner.events.emit(SoarEvent::Log {
        level: LogLevel::Error,
        message: format!("Failed to sync repository {}: {err}", repo.name),
    });
}
```
Sync errors are logged but silently swallowed — caller sees Ok(()).
When fetch_metadata fails for a repo (line 182), the error is emitted as a log event but init_repo_dbs still returns Ok(()). This means sync() and metadata_manager() both succeed even when all repos fail to sync, which could lead to an empty/stale metadata manager without the caller knowing.
Consider collecting per-repo errors and returning at least a partial-failure indicator, or failing if all repos error out.
🤖 Prompt for AI Agents
In `@crates/soar-operations/src/context.rs` around lines 182 - 187, init_repo_dbs
currently swallows fetch_metadata failures by emitting SoarEvent::Log
(SoarEvent::Log, LogLevel::Error) and always returning Ok(()); change it to
collect per-repo errors from fetch_metadata (use the repo identifier like
repo.name) and propagate a non-Ok result when appropriate: either return an Err
(or a Result type carrying a Vec of (repo.name, error)) when all repos fail, or
change the function signature to return a partial-failure indicator (e.g.,
Result<(), InitRepoDbsError> or Result<PartialSuccess, InitRepoDbsError>) and
include the collected per-repo errors so callers of init_repo_dbs / sync /
metadata_manager can detect and handle failed repo syncs instead of assuming
success.
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@crates/soar-operations/src/context.rs`:
- Around line 257-265: In create_diesel_core_db, ensure the parent directory
exists before calling File::create: get the parent dir from
core_db_file.parent(), call std::fs::create_dir_all(parent) with context on
error, then proceed to File::create and opening the DB; update the function
(create_diesel_core_db) to import std::fs and add a
create_dir_all(parent).with_context(...) call (handling the None parent case
with an appropriate error) before File::create to avoid "No such file or
directory" failures.
In `@crates/soar-operations/src/install.rs`:
- Around line 898-940: The checksum verification is currently only executed when
pkg.provides.is_some(), which causes packages without provides to skip
verification; remove that gating so checksum verification runs whenever
no_verify is false and a checksum candidate exists (e.g., when
downloaded_checksum.is_some() or real_bin exists for ghcr packages), or
alternatively gate on the presence of a checksum (pkg.bsum) rather than
pkg.provides; update the surrounding condition (replace "if
pkg.provides.is_some() && !no_verify" with a check like "!no_verify &&
(pkg.bsum.is_some() || pkg.ghcr_pkg.is_some())" or similar) and ensure the
final_checksum calculation and subsequent match still handle None cases and emit
the same SoarEvent::Verifying stages (VerifyStage::Checksum, Failed, Passed)
using the same symbols (pkg.provides.is_some(), no_verify, final_checksum,
downloaded_checksum, pkg.bsum, real_bin, SoarEvent::Verifying) so behavior is
correct for single-binary packages.
In `@crates/soar-operations/src/list.rs`:
- Around line 111-117: The call to CoreRepository::list_filtered in
list_installed is passing None for the is_installed parameter so non-installed
records are returned; update the call in list_installed to pass Some(true) for
the is_installed (the 6th positional argument) so only installed packages are
returned, keeping the other arguments the same and preserving the existing
mapping to InstalledPackage.
In `@crates/soar-operations/src/run.rs`:
- Around line 184-215: After extracting files from extract_dir into extracted,
add an explicit check that if output_path does not exist and extracted.len() !=
1 then return a clear error (e.g. anyhow::bail or Err(anyhow!())) indicating the
expected package binary was not found in the archive and include the list of
extracted file names for diagnostics; update the block around
extract_dir/output_path handling (symbols: extract_dir, extracted, output_path)
so that calculate_checksum(&output_path) is only reached when output_path exists
or has been deterministically set (the check should run before calling
calculate_checksum).
In `@crates/soar-operations/src/update.rs`:
- Around line 231-280: check_local_update computes an update_toml URL but
assigns it to `_update_toml_url`, so perform_update's url_tracking and
ultimately PackagesConfig::update_package never get the new URL; modify the flow
to propagate that computed URL: add a url (Option<String>) field to UpdateInfo
(or reuse an existing field) and set it where check_local_update currently
returns (v, url, size, toml_url), stop discarding toml_url (remove underscore),
have perform_update read UpdateInfo.url and populate url_tracking with that
Option, and ensure PackagesConfig::update_package receives and persists the URL
when present.
- Around line 281-295: The call chain from ReleaseSource::from_resolved ->
release_source.resolve() (which calls resolve_github/resolve_gitlab ->
fetch_with_fallback -> SHARED_AGENT.get().call()) performs blocking HTTP I/O and
is currently invoked from check_local_update() inside the async check_updates()
path; to avoid blocking the tokio runtime, call check_local_update() via
tokio::task::spawn_blocking() (or alternatively convert the resolve/fetch chain
to async) so the blocking SHARED_AGENT.get().call() runs on a blocking thread;
update the call sites in check_updates() (and any other async callers) to await
the spawn_blocking handle and propagate errors accordingly.
- Around line 51-71: The installed-package query in update.rs incorrectly uses
query.version.as_deref() (via CoreRepository::list_filtered) which only finds
installs already at the target version; change the installed lookup (the code
that populates installed_pkgs) to pass None for the version parameter (i.e., do
not filter installed packages by query.version) and then apply query.version
when looking up available/target packages for update checks (mirror the pattern
in install.rs where installed variants are listed without a version and the
version filter is applied when querying available packages).
🧹 Nitpick comments (8)
crates/soar-operations/src/context.rs (2)
145-153: `Decompressing` and `WritingDatabase` stages are emitted back-to-back before the DB write.
Both events fire immediately before the `match content` block. If the intent is that `Decompressing` represents a distinct phase (e.g., decompressing the fetched payload), it should have its own work between these two emissions. Currently they are effectively no-ops from a progress standpoint — the user sees "Decompressing" flash to "WritingDatabase" instantly. If `fetch_metadata` already decompresses, `Decompressing` should be emitted inside the fetch task (lines 122–131) rather than here.
29-47: Parent directory creation is implicitly handled by `fetch_metadata`, not by `handle_json_metadata` itself.
The original concern is technically valid: `handle_json_metadata` doesn't explicitly create the parent directory before calling `DbConnection::open`. However, the parent directory is guaranteed to exist because `fetch_metadata` always calls `fs::create_dir_all(&repo_path)` before returning the metadata content. Even when `handle_json_metadata` removes the existing metadata file, the parent directory persists.
That said, the code is fragile — it relies on implicit assumptions about prior operations. For defensive programming, consider explicitly creating the parent directory in `handle_json_metadata` (or in the calling code before both `write_metadata_db` and `handle_json_metadata` branches).
crates/soar-operations/src/list.rs (1)
123-137: Side-effecting `total_size` mutation inside `.map()` — prefer `fold` or a loop.
Mutating `total_size` inside `.map()` works because the iterator is sequential, but it's idiomatic Rust to use a loop or `fold` for accumulation with side effects. This also avoids confusion if someone later converts this to a parallel iterator.
crates/soar-operations/src/utils.rs (2)
266-273: Recursive executable discovery returns the first arbitrary ELF as a last resort.
Line 273 (`Ok(all_files.into_iter().next())`) will pick whichever ELF file the filesystem iteration returns first, which is non-deterministic. This is the final fallback after all name-matching heuristics, so it's understandable, but could lead to surprising behavior (e.g., symlinking a library instead of the intended binary).
Consider logging a warning when falling back to an arbitrary executable, so users can diagnose unexpected behavior.
45-53: Remove `async` from `mangle_package_symlinks` — it contains no `.await` calls.
The function body uses only synchronous operations (`fs::read_dir`, `fs::metadata`, `fs::remove_file`, `unix::fs::symlink`), making the `async` keyword unnecessary. Change the signature to `fn` and update all call sites to remove `.await`.
Proposed change
```diff
-pub async fn mangle_package_symlinks(
+pub fn mangle_package_symlinks(
     install_dir: &Path,
     bin_dir: &Path,
     provides: Option<&[PackageProvide]>,
     pkg_name: &str,
     version: &str,
     entrypoint: Option<&str>,
     binaries: Option<&[BinaryMapping]>,
 ) -> SoarResult<Vec<(PathBuf, PathBuf)>> {
```

Update call sites in crates/soar-operations/src/switch.rs:100, crates/soar-operations/src/install.rs:950, crates/soar-cli/src/use.rs:88, and crates/soar-cli/src/install.rs:1408 to remove `.await?`.
crates/soar-operations/src/install.rs (3)
637-643: `remove_old_versions` errors are silently discarded with `let _ =`.
Both the success path (line 637) and the warning path (line 643) discard errors from `remove_old_versions`. If cleanup fails (e.g., permission error), stale package directories will silently accumulate on disk with no user feedback. Consider at least logging a warning.
Proposed fix

```diff
-    let _ = remove_old_versions(&target.package, &db, false);
+    if let Err(e) = remove_old_versions(&target.package, &db, false) {
+        warn!(
+            pkg_name = target.package.pkg_name,
+            error = %e,
+            "failed to remove old versions"
+        );
+    }
```
682-684: `Arc::try_unwrap().unwrap()` and `.into_inner().unwrap()` can panic on a poisoned mutex.
After all handles are joined, `try_unwrap` should succeed since no other `Arc` clones remain. However, `.into_inner().unwrap()` will panic if the `Mutex` was poisoned by a panic inside a task. While task panics are caught by `JoinHandle` and surfaced as errors at lines 677-679 (causing early return), a panic during a `.lock().unwrap().push(...)` call (e.g., OOM) could theoretically poison the mutex and have the handle still return Ok. Consider using `.into_inner().unwrap_or_else(|e| e.into_inner())` to recover from poisoning.
160-312: Significant code duplication in `resolve_all_variants` — two near-identical metadata queries.
Lines 166-205 and 228-267 both build a `Vec<Package>` by querying metadata with `find_filtered`, mapping results to `Package`, and setting `repo_name`. The only difference is the filter parameters (`query.name` vs `None` for pkg_name, and `None` vs `target_pkg_id` for pkg_id). Consider extracting a helper like `query_packages(metadata_mgr, repo_name, pkg_name, pkg_id)` to reduce duplication. This pattern repeats across `resolve_by_pkg_id` and `find_packages` as well.
```rust
let packages: Vec<InstalledPackage> = diesel_db
    .with_conn(|conn| {
        CoreRepository::list_filtered(conn, repo_name, None, None, None, None, None, None, None)
    })?
    .into_iter()
    .map(Into::into)
    .collect();
```
list_installed does not filter by is_installed = true.
CoreRepository::list_filtered is called with is_installed: None (the 6th positional argument), so non-installed / broken records are also returned. For a function named list_installed, this likely should pass Some(true) to only return actually-installed packages.
// Signature: list_filtered(conn, repo_name, pkg_name, pkg_id, version, is_installed, pinned, limit, sort_by_id)

Proposed fix
```diff
 let packages: Vec<InstalledPackage> = diesel_db
     .with_conn(|conn| {
-        CoreRepository::list_filtered(conn, repo_name, None, None, None, None, None, None, None)
+        CoreRepository::list_filtered(conn, repo_name, None, None, None, Some(true), None, None, None)
     })?
```

🤖 Prompt for AI Agents
In `@crates/soar-operations/src/list.rs` around lines 111 - 117, The call to
CoreRepository::list_filtered in list_installed is passing None for the
is_installed parameter so non-installed records are returned; update the call in
list_installed to pass Some(true) for the is_installed (the 6th positional
argument) so only installed packages are returned, keeping the other arguments
the same and preserving the existing mapping to InstalledPackage.
No. We track is_installed even for broken packages and would like to see on info whether some packages were partially installed or broken, so None is valid.
@QaidVoid, thank you for the clarification! I understand now that list_installed intentionally includes broken and partially installed packages by passing None for the is_installed parameter. This allows you to track and display information about packages that may be in an incomplete or broken state, which is valuable for diagnostics and troubleshooting.
I apologize for the incorrect suggestion.
✏️ Learnings added
Learnt from: QaidVoid
Repo: pkgforge/soar PR: 157
File: crates/soar-operations/src/list.rs:111-117
Timestamp: 2026-02-14T06:08:17.939Z
Learning: In crates/soar-operations/src/list.rs, the `list_installed` function intentionally passes `None` for the `is_installed` parameter to `CoreRepository::list_filtered`. This is to include broken and partially installed packages in the results, allowing users to see packages that may be in an incomplete or broken state for diagnostics.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: QaidVoid
Repo: pkgforge/soar PR: 137
File: crates/soar-cli/src/apply.rs:149-153
Timestamp: 2026-01-09T16:14:21.516Z
Learning: In crates/soar-cli/src/apply.rs, for URL packages, passing non-installed records (is_installed=false) as existing_install is intentional. The query filters by repo_name, pkg_name, and pkg_id, ensuring the record is for the same package. This allows resuming partial installs and preserving settings (portable paths, profile) from previous installs.
Learnt from: QaidVoid
Repo: pkgforge/soar PR: 154
File: crates/soar-cli/src/install.rs:1096-1096
Timestamp: 2026-02-11T16:23:54.342Z
Learning: In crates/soar-cli/src/install.rs, FileLock uses only pkg_name (not a composite key with repo_name/pkg_id) intentionally. Different package variants with the same pkg_name but different pkg_id or repo_name will symlink to the same binary names in the bin directory. Locking at the pkg_name level prevents symlink race conditions during concurrent installations of different variants.
```rust
if let Some(packages) = packages {
    for package in packages {
        let query = PackageQuery::try_from(package.as_str())?;

        let installed_pkgs: Vec<InstalledPackage> = diesel_db
            .with_conn(|conn| {
                CoreRepository::list_filtered(
                    conn,
                    query.repo_name.as_deref(),
                    query.name.as_deref(),
                    query.pkg_id.as_deref(),
                    query.version.as_deref(),
                    Some(true),
                    None,
                    Some(1),
                    Some(SortDirection::Asc),
                )
            })?
            .into_iter()
            .map(Into::into)
            .collect();
```
🧩 Analysis chain
🏁 Script executed:
# Get the full context of the update.rs file around the mentioned lines
cd crates/soar-operations/src && cat -n update.rs | head -100
🏁 Script executed:
# Check if there's a check_updates function or similar
rg -n 'fn.*check_updates|fn.*update' crates/soar-operations/src/update.rs -A 5
🏁 Script executed:
# Look for how PackageQuery is constructed and used in the codebase
rg -n 'PackageQuery::try_from' --type=rust -B 2 -A 5
🏁 Script executed:
# Check the CLI layer update.rs to see how it uses the operations function
cat -n crates/soar-cli/src/update.rs | head -120
🏁 Script executed:
# Examine the check_repo_update and check_local_update functions to see how they use the installed package
sed -n '132,180p' crates/soar-operations/src/update.rs
🏁 Script executed:
# Check if there are any comments or tests that clarify the expected behavior of version filtering
rg -n 'version.*filter|filter.*version|update.*check' crates/soar-operations/src/update.rs -i
🏁 Script executed:
# Let me check the PackageQuery structure to understand what version represents
rg -n 'struct PackageQuery|impl.*PackageQuery' --type=rust -A 10
🏁 Script executed:
# Check the find_newer_version function signature to understand its parameters
rg -n 'fn find_newer_version' --type=rust -A 5
🏁 Script executed:
# Look for any tests or documentation about version filtering behavior in updates
rg -n 'test.*update|#\[test\]' crates/soar-operations/src/update.rs -A 3
🏁 Script executed:
# Check if there's a difference in behavior between specifying version vs not specifying
rg -n 'pkg@|version.*specified' --type=rust -i
🏁 Script executed:
# Check if there's any documentation or comments about the version filter in update operations
rg -n 'version.*@|@.*version' crates/soar-cli/src/update.rs -B 3 -A 3
🏁 Script executed:
# Look at how the remove command handles version filtering for comparison
sed -n '20,50p' crates/soar-operations/src/remove.rs
🏁 Script executed:
# Check the list_filtered function signature to understand its parameters
rg -n 'fn list_filtered' --type=rust -A 15
🏁 Script executed:
# Check examples or tests that show expected behavior for version queries
find . -name '*.rs' -type f | xargs rg -l 'test.*query|example' | head -5
🏁 Script executed:
# Let me understand the intent better by checking if there's any usage documentation
cat crates/soar-core/src/package/query.rs
🏁 Script executed:
# Check if the version from query is used anywhere else in the update flow
rg -n 'query.version' crates/soar-operations/src/update.rs
🏁 Script executed:
# Compare with install command to see how it handles version filtering
sed -n '65,120p' crates/soar-operations/src/install.rs
🏁 Script executed:
# Check how the resolve_normal function in install.rs uses the version
sed -n '220,280p' crates/soar-operations/src/install.rs
🏁 Script executed:
# Look at what happens when version is specified in a normal flow - check resolve_normal
rg -n 'fn resolve_normal' --type=rust -A 30
🏁 Script executed:
# Check if there's any difference in behavior when version is NOT specified vs when it IS
sed -n '86,130p' crates/soar-operations/src/update.rs
🏁 Script executed:
# Check the full resolve_normal function to see how it handles query.version
sed -n '415,470p' crates/soar-operations/src/install.rs
🏁 Script executed:
# Let me check if check_repo_update or check_local_update do anything special with version
sed -n '198,260p' crates/soar-operations/src/update.rs
Version filter on installed-package query is overly restrictive for update checks.
Line 62 passes query.version.as_deref() to filter installed packages, which only finds packages already at that version. For update checks, the version should filter the target packages, not the installed base. If a user specifies pkg@2.0, the intent is to check if version 2.0 is available as an update, not to find only packages already at 2.0.
Compare with install.rs (line 429), which passes None for the version parameter when looking up installed packages, then filters available packages by version later. The update logic should follow the same pattern: find all installed variants without version filtering, then use version to check for updates to the specified version.
🤖 Prompt for AI Agents
In `@crates/soar-operations/src/update.rs` around lines 51 - 71, The
installed-package query in update.rs incorrectly uses query.version.as_deref()
(via CoreRepository::list_filtered) which only finds installs already at the
target version; change the installed lookup (the code that populates
installed_pkgs) to pass None for the version parameter (i.e., do not filter
installed packages by query.version) and then apply query.version when looking
up available/target packages for update checks (mirror the pattern in install.rs
where installed variants are listed without a version and the version filter is
applied when querying available packages).
```rust
} else {
    let release_source = match ReleaseSource::from_resolved(resolved) {
        Some(s) => s,
        None => {
            warn!("No release source configured for {}", pkg.pkg_name);
            return Ok(None);
        }
    };
    let release = match release_source.resolve() {
        Ok(r) => r,
        Err(e) => {
            warn!("Failed to check for updates for {}: {}", pkg.pkg_name, e);
            return Ok(None);
        }
    };
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find ReleaseSource struct and resolve method implementation
find . -name "*.rs" -type f | xargs grep -l "struct ReleaseSource\|impl ReleaseSource" | head -5
🏁 Script executed:
#!/bin/bash
# Search for resolve method definition in ReleaseSource
rg -n "fn resolve\(" --type=rust -B 5 -A 30 | rg -B 5 -A 30 "impl.*ReleaseSource\|fn resolve"
🏁 Script executed:
#!/bin/bash
# Check the check_local_update function signature and context
rg -n "fn check_local_update" --type=rust -B 2 -A 50
🏁 Script executed:
#!/bin/bash
# Find resolve method in ReleaseSource impl
rg -n "impl ReleaseSource" --type=rust -A 200 | head -300
🏁 Script executed:
#!/bin/bash
# Find check_updates function and see if it's async
rg -n "fn check_updates" --type=rust -B 2 -A 3
🏁 Script executed:
#!/bin/bash
# Get the complete resolve() method from release_source.rs
cat -n crates/soar-core/src/package/release_source.rs | grep -A 50 "fn resolve"
🏁 Script executed:
#!/bin/bash
# Get the full check_updates function implementation
rg -n "pub async fn check_updates" --type=rust -A 100 | head -150
🏁 Script executed:
#!/bin/bash
# Check if check_local_update is called directly or in any special way
rg -n "check_local_update" --type=rust -B 2 -A 2
🏁 Script executed:
#!/bin/bash
# Find Github::fetch_releases implementation
rg -n "fn fetch_releases" --type=rust -B 2 -A 15
🏁 Script executed:
#!/bin/bash
# Check what crate provides the Github and GitLab types
rg -n "use.*Github\|use.*GitLab" --type=rust | head -20
🏁 Script executed:
#!/bin/bash
# Get more context on the Github::fetch_releases implementation to see what HTTP client it uses
cat -n crates/soar-dl/src/github.rs | sed -n '57,100p'
🏁 Script executed:
#!/bin/bash
# Check the http client being used in github.rs
rg -n "use\|import\|client\|request" crates/soar-dl/src/github.rs | head -30
🏁 Script executed:
#!/bin/bash
# Find fetch_with_fallback function
rg -n "fn fetch_with_fallback" --type=rust -B 2 -A 20Repository: pkgforge/soar
Length of output: 1713
🏁 Script executed:
#!/bin/bash
# Check imports at the top of github.rs
head -30 crates/soar-dl/src/github.rsRepository: pkgforge/soar
Length of output: 668
Blocking HTTP calls in async context will block the tokio runtime.
release_source.resolve() → resolve_github()/resolve_gitlab() → fetch_with_fallback() → SHARED_AGENT.get().call() performs a blocking HTTP request. This chain is called directly from check_local_update() (lines 75, 113), which is invoked from the async check_updates() function without spawn_blocking(), causing the tokio runtime to be blocked.
Wrap the check_local_update() call in tokio::task::spawn_blocking() or make the entire chain async.
🤖 Prompt for AI Agents
In `@crates/soar-operations/src/update.rs` around lines 281 - 295, The call chain
from ReleaseSource::from_resolved -> release_source.resolve() (which calls
resolve_github/resolve_gitlab -> fetch_with_fallback ->
SHARED_AGENT.get().call()) performs blocking HTTP I/O and is currently invoked
from check_local_update() inside the async check_updates() path; to avoid
blocking the tokio runtime, call check_local_update() via
tokio::task::spawn_blocking() (or alternatively convert the resolve/fetch chain
to async) so the blocking SHARED_AGENT.get().call() runs on a blocking thread;
update the call sites in check_updates() (and any other async callers) to await
the spawn_blocking handle and propagate errors accordingly.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@crates/soar-operations/src/install.rs`:
- Line 764: Replace the direct call to the global get_config() with the
context-based accessor ctx.config() to ensure the function respects custom
configuration in SoarContext; locate the line using get_config() (the one added
as let config = get_config();) and change it to obtain config via ctx.config(),
and update any subsequent uses in that scope to use the resulting config
variable so behavior matches other usages (e.g., the code around lines using
ctx.config() at 579 and 962).
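The pattern the comment asks for — reading configuration through the context instead of a process-wide global — can be sketched like this. The `SoarContext` and `Config` shapes below are illustrative placeholders, not soar's actual definitions.

```rust
// Hypothetical shapes: a context that owns its Config lets callers inject
// custom configuration instead of reading a process-wide global.
#[derive(Debug, Clone, PartialEq)]
pub struct Config {
    pub parallel: bool,
}

pub struct SoarContext {
    config: Config,
}

impl SoarContext {
    pub fn new(config: Config) -> Self {
        Self { config }
    }

    // Preferred: every operation reads configuration through the context...
    pub fn config(&self) -> &Config {
        &self.config
    }
}

// ...instead of through a global accessor like this, which silently ignores
// whatever custom Config the caller placed inside SoarContext.
pub fn get_config() -> Config {
    Config { parallel: true }
}
```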
In `@crates/soar-operations/src/run.rs`:
- Around line 184-229: When extract_dir exists but yields no files the code
silently returns Ok which later causes calculate_checksum(&output_path) to fail;
update the block handling extracted files (the loop that builds the extracted
Vec and the subsequent checks) to detect if extracted.is_empty() and return a
clear SoarError::Custom describing that the archive unpacked zero files (include
extract_dir.display() or the expected output_path file name in the message).
Place this check after fs::remove_dir_all(&extract_dir).ok() and before the
existing if !output_path.exists() branch so callers get a meaningful error
instead of a file-not-found in calculate_checksum.
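The guard suggested above can be sketched as a std-only helper. The name `collect_extracted` is hypothetical, and `SoarError::Custom` is modeled here as a plain `String`.

```rust
use std::fs;
use std::path::{Path, PathBuf};

// After unpacking, fail loudly when the archive produced zero files instead
// of letting a later calculate_checksum() call surface a confusing
// file-not-found error.
fn collect_extracted(extract_dir: &Path) -> Result<Vec<PathBuf>, String> {
    let mut extracted = Vec::new();
    for entry in fs::read_dir(extract_dir).map_err(|e| e.to_string())? {
        let path = entry.map_err(|e| e.to_string())?.path();
        if path.is_file() {
            extracted.push(path);
        }
    }
    if extracted.is_empty() {
        return Err(format!(
            "archive unpacked zero files into {}",
            extract_dir.display()
        ));
    }
    Ok(extracted)
}
```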
🧹 Nitpick comments (7)
crates/soar-operations/src/context.rs (1)
145-168: `Decompressing` and `WritingDatabase` stages are emitted back-to-back before the actual write.

Both `SyncStage::Decompressing` (line 147) and `SyncStage::WritingDatabase` (line 152) are emitted consecutively before the metadata write at lines 155–163. The `Decompressing` stage never has a visible phase where it's the "current" stage from the user's perspective — it is immediately followed by `WritingDatabase`. If decompression actually occurs inside `fetch_metadata` (in the spawned task), the `Decompressing` event should be emitted there instead, or removed here to avoid a misleading UI sequence.

crates/soar-operations/src/update.rs (1)
469-469: Cleanup error from `remove_old_versions` is silently discarded.

`let _ = remove_old_versions(...)` drops any error. If old version cleanup fails (e.g., permission denied), the user has no indication that stale files remain. Consider at least logging a warning.

Proposed improvement

```diff
-        let _ = remove_old_versions(pkg, &diesel_db, false);
+        if let Err(e) = remove_old_versions(pkg, &diesel_db, false) {
+            warn!(
+                "Failed to remove old versions for {}#{}: {}",
+                pkg.pkg_name, pkg.pkg_id, e
+            );
+        }
```

crates/soar-operations/src/install.rs (3)
682-684: Triple `unwrap()` chain on `Arc::try_unwrap` + `Mutex::into_inner` is safe here but fragile.

After joining all handles, these `Arc`s should have exactly one strong reference remaining, making `try_unwrap` safe. Similarly, the mutex can't be poisoned unless a task panicked, which would have caused the join at lines 676–680 to propagate the error first. However, if future refactoring introduces additional clones or changes error handling, these will become silent panics.

Defensive alternative

```diff
-    let installed = Arc::try_unwrap(installed).unwrap().into_inner().unwrap();
-    let failed = Arc::try_unwrap(failed).unwrap().into_inner().unwrap();
-    let warnings = Arc::try_unwrap(warnings).unwrap().into_inner().unwrap();
+    let installed = Arc::try_unwrap(installed)
+        .map_err(|_| SoarError::Custom("internal: outstanding Arc references".into()))?
+        .into_inner()
+        .map_err(|e| SoarError::Custom(format!("internal: mutex poisoned: {e}")))?;
+    let failed = Arc::try_unwrap(failed)
+        .map_err(|_| SoarError::Custom("internal: outstanding Arc references".into()))?
+        .into_inner()
+        .map_err(|e| SoarError::Custom(format!("internal: mutex poisoned: {e}")))?;
+    let warnings = Arc::try_unwrap(warnings)
+        .map_err(|_| SoarError::Custom("internal: outstanding Arc references".into()))?
+        .into_inner()
+        .map_err(|e| SoarError::Custom(format!("internal: mutex poisoned: {e}")))?;
```
637-643: Cleanup errors from `remove_old_versions` silently discarded in both success and warning paths.

Lines 637 and 643 both use `let _ = remove_old_versions(...)`. If cleanup fails, stale versions remain with no user indication. Consider logging a warning, consistent with the recommendation in `update.rs`.

Proposed fix

```diff
-        let _ = remove_old_versions(&target.package, &db, false);
+        if let Err(e) = remove_old_versions(&target.package, &db, false) {
+            warn!(
+                "Failed to remove old versions for {}#{}: {}",
+                target.package.pkg_name, target.package.pkg_id, e
+            );
+        }
```
873-896: Signature verification is silently skipped with only a `warn!` log when pubkey is missing.

When `signature_verification()` is enabled for a repository but the `minisign.pub` file doesn't exist (line 884), the package is installed without signature verification. This degrades security posture silently. For a repository that explicitly opts into signature verification, a missing pubkey should arguably be an error rather than a warning.

crates/soar-operations/src/types.rs (2)
128-146: `SearchEntry` and `PackageListEntry` are structurally identical.

Both contain `package: Package` and `installed: bool`. Consider using a single type (or a type alias) to reduce duplication, unless you anticipate them diverging in the future.
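The single-type suggestion above can be sketched with type aliases. The `Package` struct below is a placeholder for the real soar-core type; only the two field names come from the review comment.

```rust
// Placeholder for the real soar-core Package type.
#[derive(Debug, Clone, PartialEq)]
pub struct Package {
    pub name: String,
}

// One shared definition for the two structurally identical entry types.
#[derive(Debug, Clone, PartialEq)]
pub struct PackageEntry {
    pub package: Package,
    pub installed: bool,
}

// The public API keeps both names, but the layout is defined once.
pub type SearchEntry = PackageEntry;
pub type PackageListEntry = PackageEntry;
```

Since aliases are interchangeable with the underlying type, this is a non-breaking starting point; if the two types later diverge, each alias can be promoted back to its own struct.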
11-25: Consider adding common derives (`Debug`, `Clone`) to public types.

Most public structs lack `Debug` and `Clone` derives. For a new public API crate, missing derives can force breaking changes later when consumers need to debug-print or clone these types. Types like `AmbiguousPackage`, `InstallReport`, `RemoveReport`, `UpdateReport`, `HealthReport`, `ApplyReport`, `RunResult`, and `VariantInfo` would benefit from at least `Debug`.

Also applies to: 45-48, 51-55, 89-99, 103-111, 113-124, 162-173, 204-209, 218-220, 224-227
…gnostic operations (pkgforge#157) ⌚
Summary by CodeRabbit