diff --git a/README.md b/README.md index c61cc66..75c883d 100644 --- a/README.md +++ b/README.md @@ -54,3 +54,5 @@ If you want to check it out, you can opt into it with `-Dklint::atomic_context`. * [`build_error` checks](doc/build_error.md) * [Stack frame size check](doc/stack_size.md) * [Prelude check](doc/not_using_prelude.md) +* [`build_assert` not inlined](doc/build_assert_not_inlined.md) +* [`build_assert` can be const](doc/build_assert_can_be_const.md) diff --git a/doc/build_assert_can_be_const.md b/doc/build_assert_can_be_const.md new file mode 100644 index 0000000..fc5fcef --- /dev/null +++ b/doc/build_assert_can_be_const.md @@ -0,0 +1,87 @@ + + +# `build_assert_can_be_const` + +This lint warns when a `build_assert!` condition is already effectively constant and can therefore +be written as a const assertion instead: + +```rust +const { + assert!(OFFSET < N, "offset must stay in bounds"); +} +``` + +`build_assert!` is meant for conditions that cannot be checked in a plain const context, such as +conditions depending on function arguments that need to be optimized through an inline call chain. +If the condition does not depend on runtime values, using a const assert is clearer and fails +earlier. + +## Literal and const-only cases + +These trigger the lint because the assertion is already constant: + +```rust +fn literal_const_only() { + build_assert!(1 < LIMIT); +} +``` + +```rust +fn const_only_direct() { + build_assert!(OFFSET < N, "offset must stay in bounds"); +} +``` + +## Wrapper macros + +Simple wrapper macros do not hide the const-only case: + +```rust +macro_rules! 
forward_build_assert {
+    ($cond:expr) => {
+        build_assert!($cond);
+    };
+}
+
+fn const_only_wrapper() {
+    forward_build_assert!(OFFSET < LIMIT);
+}
+```
+
+## Local const-only helpers
+
+The lint also tracks local helper return values:
+
+```rust
+fn helper<const N: usize>() -> usize {
+    N - 1
+}
+
+fn const_only_helper() {
+    build_assert!(helper::<N>() < N);
+}
+```
+
+Because the helper result still depends only on compile-time values, this should also use a const
+assert instead of `build_assert!`.
+
+## Runtime-dependent cases
+
+These do not trigger `build_assert_can_be_const`:
+
+```rust
+fn runtime_direct(offset: usize, n: usize) {
+    build_assert!(offset < n);
+}
+```
+
+```rust
+fn runtime_param_const_generic<const N: usize>(offset: usize) {
+    build_assert!(offset < N);
+}
+```
+
+Those cases are the domain of [`build_assert_not_inlined`](build_assert_not_inlined.md), which
+checks whether the non-constant assertion still has the required `#[inline(always)]` call chain.
diff --git a/doc/build_assert_not_inlined.md b/doc/build_assert_not_inlined.md
new file mode 100644
index 0000000..b90a701
--- /dev/null
+++ b/doc/build_assert_not_inlined.md
@@ -0,0 +1,187 @@
+
+
+# `build_assert_not_inlined`
+
+This lint warns when a `build_assert!` condition depends on non-static values, but the function
+containing that dependency is not marked `#[inline(always)]`.
+
+`build_assert!` is only valid when the compiler can optimize away its error path. Const-only uses
+do not need forced inlining, but once the condition depends on values flowing through a function
+boundary, the surrounding call chain must stay inlineable.
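+
+When the lint fires, the fix is to keep the reported chain inlineable. As a minimal sketch
+(using the same `runtime_direct` example as the sections below):
+
+```rust
+#[inline(always)]
+fn runtime_direct(offset: usize, n: usize) {
+    build_assert!(offset < n);
+}
+```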
+
+## Const-only and const-generic cases
+
+These do not trigger the lint because the condition is already effectively constant:
+
+```rust
+fn literal_const_only() {
+    build_assert!(1 < 2);
+}
+
+fn const_only_direct() {
+    build_assert!(OFFSET < N);
+}
+
+fn const_only_wrapper() {
+    helper_macro!(OFFSET < LIMIT);
+}
+```
+
+These cases are covered by the separate
+[`build_assert_can_be_const`](build_assert_can_be_const.md) lint, which suggests replacing
+`build_assert!` with `const { assert!(...) }`.
+
+## Runtime-dependent parameter flow
+
+This does trigger the lint:
+
+```rust
+fn runtime_direct(offset: usize, n: usize) {
+    build_assert!(offset < n);
+}
+```
+
+The same applies when only part of the condition is dynamic:
+
+```rust
+fn runtime_param_const_generic<const N: usize>(offset: usize) {
+    build_assert!(offset < N);
+}
+```
+
+## Local helper return-value flow
+
+The lint tracks values through local helpers instead of treating every helper call as opaque:
+
+```rust
+fn passthrough(x: usize) -> usize {
+    x
+}
+
+fn runtime_helper_call(offset: usize) {
+    build_assert!(passthrough(offset) < N);
+}
+```
+
+By contrast, helpers that return only const-derived values do not trigger the lint:
+
+```rust
+fn const_helper<const N: usize>() -> usize {
+    N - 1
+}
+
+fn const_only_helper_call() {
+    build_assert!(const_helper::<N>() < N);
+}
+```
+
+## Wrapper macros
+
+The lint identifies `build_assert!` through macro ancestry, so simple wrapper macros do not hide
+the dependency:
+
+```rust
+macro_rules! 
helper_macro { + ($cond:expr) => { + build_assert!($cond); + }; +} + +fn runtime_wrapper(offset: usize, n: usize) { + helper_macro!(offset < n); +} +``` + +## Function pointers + +The analysis also handles function pointers when it can resolve the local target: + +```rust +fn runtime_fnptr_target(offset: usize) { + runtime_direct(offset, LIMIT); +} + +fn fn_pointer_entry(offset: usize) { + let f: fn(usize) = runtime_fnptr_target; + f(offset); +} +``` + +Const-only calls through function pointers stay quiet: + +```rust +fn fn_pointer_const_entry() { + let f: fn(usize) = runtime_fnptr_target; + f(1); +} +``` + +## Dynamic dispatch + +The lint uses monomorphized use edges to recover dyn-dispatch callsites: + +```rust +trait RuntimeDispatch { + fn run(&self, offset: usize); +} + +trait ConstRuntimeDispatch { + fn run(&self); +} + +impl RuntimeDispatch for RuntimeChecker { + fn run(&self, offset: usize) { + runtime_direct(offset, LIMIT); + } +} + +impl ConstRuntimeDispatch for ConstRuntimeChecker { + fn run(&self) { + build_assert!(OFFSET < LIMIT); + } +} + +fn dyn_dispatch_entry(offset: usize) { + let checker: &dyn RuntimeDispatch = &RuntimeChecker; + checker.run(offset); +} + +fn dyn_dispatch_ambiguous_names(offset: usize) { + let runtime_checker: &dyn RuntimeDispatch = &RuntimeChecker; + let const_checker: &dyn ConstRuntimeDispatch = &ConstRuntimeChecker; + const_checker.run(); + runtime_checker.run(offset); +} +``` + +This also shows the ambiguous same-name trait-method case: a const-only `run()` method does not +hide the runtime-dependent `run(offset)` call. + +## Propagation to callers + +The lint is not limited to the function that directly contains `build_assert!`. 
If a callee's
+`build_assert!` still depends on caller-provided values, the requirement propagates upward:
+
+```rust
+fn runtime_direct(offset: usize, n: usize) {
+    build_assert!(offset < n);
+}
+
+fn runtime_caller(offset: usize, n: usize) {
+    runtime_direct(offset, n);
+}
+```
+
+Both functions should be `#[inline(always)]`.
+
+If a caller passes only effectively constant values, propagation stops there:
+
+```rust
+fn runtime_entry() {
+    runtime_direct(1, 4);
+}
+```
+
+This does not trigger the lint.
diff --git a/src/build_assert.rs b/src/build_assert.rs
new file mode 100644
index 0000000..f2cf5f8
--- /dev/null
+++ b/src/build_assert.rs
@@ -0,0 +1,1048 @@
+// SPDX-License-Identifier: MIT OR Apache-2.0
+
+use rustc_ast::Mutability;
+use rustc_data_structures::fx::{FxHashMap, FxHashSet};
+use rustc_hir::def::{DefKind, Res};
+use rustc_hir::def_id::{DefId, LocalDefId};
+use rustc_hir::intravisit as hir_visit;
+use rustc_hir::{Body, Expr, HirId, QPath, Stmt, StmtKind, UnOp};
+use rustc_lint::{LateContext, LateLintPass};
+use rustc_middle::ty::{TyCtxt, TypeckResults};
+use rustc_session::impl_lint_pass;
+use rustc_span::Span;
+
+use crate::build_assert_can_be_const::{BUILD_ASSERT_CAN_BE_CONST, emit_build_assert_can_be_const};
+use crate::build_assert_not_inlined::{
+    BUILD_ASSERT_NOT_INLINED, emit_build_assert_not_inlined, has_inline_always,
+};
+use crate::ctxt::AnalysisCtxt;
+use crate::mono_graph::{CallableTargets, IndirectCandidates, collect_indirect_candidates};
+
+#[derive(Clone, Copy, PartialEq, Eq)]
+pub struct BuildAssertCondition {
+    /// Span of the original `build_assert!(...)` invocation in source.
+    pub call_site: Span,
+    /// Span of the first macro argument, i.e. the asserted condition.
+    pub condition_span: Span,
+}
+
+#[derive(Clone, Default, PartialEq, Eq)]
+pub enum ExprDependency {
+    #[default]
+    Constant,
+    Param(FxHashSet<usize>),
+    Runtime,
+}
+
+impl ExprDependency {
+    /// Record that an expression depends on one specific function parameter.
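+    ///
+    /// A minimal sketch of the intended use (hypothetical index, for illustration only):
+    ///
+    /// ```ignore
+    /// // parameter 0 of the enclosing function flows into the value
+    /// let dep = ExprDependency::param(0);
+    /// assert!(matches!(dep, ExprDependency::Param(ref p) if p.contains(&0)));
+    /// ```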
+    pub fn param(index: usize) -> Self {
+        let mut params = FxHashSet::default();
+        params.insert(index);
+        Self::Param(params)
+    }
+
+    /// Merge dependencies from subexpressions. Any runtime component dominates; otherwise we keep
+    /// the union of parameter indices that still matter to the value.
+    pub fn combine<I>(dependencies: I) -> Self
+    where
+        I: IntoIterator<Item = ExprDependency>,
+    {
+        let mut params = FxHashSet::default();
+
+        for dependency in dependencies {
+            match dependency {
+                ExprDependency::Constant => {}
+                ExprDependency::Param(dep_params) => params.extend(dep_params),
+                ExprDependency::Runtime => return ExprDependency::Runtime,
+            }
+        }
+
+        if params.is_empty() {
+            ExprDependency::Constant
+        } else {
+            ExprDependency::Param(params)
+        }
+    }
+}
+
+#[derive(Clone, Copy, PartialEq, Eq)]
+pub(crate) enum RequirementOrigin {
+    Direct { span: Span },
+    Propagated { callee: LocalDefId, call_span: Span },
+}
+
+type FunctionSummaries = FxHashMap<LocalDefId, FunctionSummary>;
+
+#[derive(Clone, Default, PartialEq, Eq)]
+pub(crate) struct RequirementSummary {
+    pub(crate) param_dependencies: FxHashSet<usize>,
+    has_local_runtime_dependency: bool,
+    pub(crate) origin: Option<RequirementOrigin>,
+}
+
+#[derive(Clone, Default, PartialEq, Eq)]
+pub(crate) struct FunctionSummary {
+    pub(crate) requirement: RequirementSummary,
+    return_dependency: ExprDependency,
+    pub(crate) const_only_build_asserts: Vec<Span>,
+}
+
+impl RequirementSummary {
+    /// The inline requirement matters only when some non-constant value still flows into
+    /// `build_assert!`, either directly or through a caller.
+    pub(crate) fn requires_inline(&self) -> bool {
+        self.has_local_runtime_dependency || !self.param_dependencies.is_empty()
+    }
+
+    /// Record a direct `build_assert!` use in this body. Constant assertions stay quiet; anything
+    /// else seeds the later caller propagation.
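+    ///
+    /// Sketch of the three outcomes (illustrative values only):
+    ///
+    /// ```ignore
+    /// s.record_direct_use(ExprDependency::Constant, span);  // nothing recorded
+    /// s.record_direct_use(ExprDependency::param(1), span);  // callers inherit a param-1 dependency
+    /// s.record_direct_use(ExprDependency::Runtime, span);   // this body needs #[inline(always)]
+    /// ```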
+    fn record_direct_use(&mut self, dependency: ExprDependency, span: Span) {
+        match dependency {
+            ExprDependency::Constant => {}
+            ExprDependency::Param(params) => {
+                self.param_dependencies.extend(params);
+                self.origin
+                    .get_or_insert(RequirementOrigin::Direct { span });
+            }
+            ExprDependency::Runtime => {
+                self.has_local_runtime_dependency = true;
+                self.origin
+                    .get_or_insert(RequirementOrigin::Direct { span });
+            }
+        }
+    }
+
+    /// Record that this function inherits the inline requirement from a callee after mapping the
+    /// callee's relevant parameters onto the actual callsite arguments.
+    fn record_propagated_use(&mut self, dependency: ExprDependency, origin: RequirementOrigin) {
+        match dependency {
+            ExprDependency::Constant => {}
+            ExprDependency::Param(params) => {
+                if !params.is_empty() {
+                    self.param_dependencies.extend(params);
+                    self.origin.get_or_insert(origin);
+                }
+            }
+            ExprDependency::Runtime => {
+                self.has_local_runtime_dependency = true;
+                self.origin.get_or_insert(origin);
+            }
+        }
+    }
+}
+
+impl FunctionSummary {
+    fn record_const_only_build_assert(&mut self, span: Span) {
+        self.const_only_build_asserts.push(span);
+    }
+}
+
+fn build_assert_call_site(
+    tcx: TyCtxt<'_>,
+    span: Span,
+    build_assert: Option<DefId>,
+) -> Option<Span> {
+    // Match by diagnostic item first, then by macro name as a compatibility fallback for older
+    // trees where the explicit annotation may not exist yet.
+    span.macro_backtrace()
+        .find(|expn_data| {
+            let Some(macro_def_id) = expn_data.macro_def_id else {
+                return false;
+            };
+
+            Some(macro_def_id) == build_assert
+                || tcx.item_name(macro_def_id) == crate::symbol::build_assert
+        })
+        .map(|expn_data| expn_data.call_site.source_callsite())
+}
+
+pub fn build_assert_condition(
+    tcx: TyCtxt<'_>,
+    expr: &Expr<'_>,
+    build_assert: Option<DefId>,
+) -> Option<BuildAssertCondition> {
+    // Recover the asserted condition from the expanded HIR shape of `build_assert!` itself:
+    // the macro body contributes the outer `!`, while the operand span still points at the
+    // user's original condition expression.
+    let rustc_hir::ExprKind::Unary(UnOp::Not, condition) = expr.kind else {
+        return None;
+    };
+    if !expr.span.from_expansion() {
+        return None;
+    }
+
+    let call_site = build_assert_call_site(tcx, expr.span, build_assert)?;
+    let condition_span = condition.span.source_callsite();
+    Some(BuildAssertCondition {
+        call_site,
+        condition_span,
+    })
+}
+
+fn is_reportable_fn(tcx: TyCtxt<'_>, def_id: LocalDefId) -> bool {
+    matches!(tcx.def_kind(def_id), DefKind::Fn | DefKind::AssocFn)
+}
+
+#[derive(Default)]
+struct ScopeFrame {
+    bindings: Vec<(HirId, Option<LocalBinding>)>,
+}
+
+#[derive(Clone, Default)]
+struct LocalBinding {
+    dependency: ExprDependency,
+    callables: CallableTargets,
+}
+
+struct LocalEnv {
+    bindings: FxHashMap<HirId, LocalBinding>,
+    scopes: Vec<ScopeFrame>,
+}
+
+impl LocalEnv {
+    fn new() -> Self {
+        Self {
+            bindings: FxHashMap::default(),
+            scopes: vec![ScopeFrame::default()],
+        }
+    }
+
+    fn enter_scope(&mut self) {
+        self.scopes.push(ScopeFrame::default());
+    }
+
+    fn exit_scope(&mut self) {
+        let frame = self.scopes.pop().expect("scope underflow");
+
+        for (hir_id, old) in frame.bindings.into_iter().rev() {
+            if let Some(old) = old {
+                self.bindings.insert(hir_id, old);
+            } else {
+                self.bindings.remove(&hir_id);
+            }
+        }
+    }
+
+    fn update_binding(&mut self, hir_id: HirId, f: impl FnOnce(&mut LocalBinding)) {
+        let mut binding = 
self.binding(hir_id).cloned().unwrap_or_default(); + f(&mut binding); + let old = self.bindings.insert(hir_id, binding); + self.scopes + .last_mut() + .expect("root scope should always be present") + .bindings + .push((hir_id, old)); + } + + fn binding(&self, hir_id: HirId) -> Option<&LocalBinding> { + self.bindings.get(&hir_id) + } + + fn get_dependency(&self, hir_id: HirId) -> Option<&ExprDependency> { + self.binding(hir_id).map(|binding| &binding.dependency) + } + + fn get_callables(&self, hir_id: HirId) -> Option<&CallableTargets> { + self.binding(hir_id).map(|binding| &binding.callables) + } + + fn bind_dependency(&mut self, hir_id: HirId, dependency: ExprDependency) { + self.update_binding(hir_id, |binding| binding.dependency = dependency); + } + + fn bind_callables(&mut self, hir_id: HirId, targets: CallableTargets) { + self.update_binding(hir_id, |binding| binding.callables = targets); + } + + fn clear_callables(&mut self, hir_id: HirId) { + self.update_binding(hir_id, |binding| binding.callables.clear()); + } + + fn bind_runtime_pattern(&mut self, pat: &rustc_hir::Pat<'_>) { + pat.each_binding(|_, hir_id, _, _| { + self.bind_dependency(hir_id, ExprDependency::Runtime); + }); + } +} + +struct SummaryContext<'a, 'tcx> { + tcx: TyCtxt<'tcx>, + owner: LocalDefId, + typeck: &'a TypeckResults<'tcx>, + build_assert: Option, + callee_summaries: &'a FunctionSummaries, + indirect_candidates: &'a IndirectCandidates, +} + +struct SummaryState { + env: LocalEnv, + return_dependencies: Vec, + build_assert_conditions: FxHashMap, + seen_build_assert_callsites: FxHashSet, + summary: FunctionSummary, +} + +struct SummaryAnalyzer<'a, 'tcx> { + cx: SummaryContext<'a, 'tcx>, + state: SummaryState, +} + +enum ResolvedCall { + Local(LocalDefId), + NonLocalConst, + Other, +} + +impl<'a, 'tcx> SummaryAnalyzer<'a, 'tcx> { + /// Seed the local environment with parameter dependencies so later expression evaluation can + /// distinguish const-only values from values that still depend 
on caller inputs. + fn new( + tcx: TyCtxt<'tcx>, + owner: LocalDefId, + typeck: &'a TypeckResults<'tcx>, + build_assert: Option, + callee_summaries: &'a FunctionSummaries, + indirect_candidates: &'a IndirectCandidates, + body: &'tcx Body<'tcx>, + ) -> Self { + let mut analyzer = Self { + cx: SummaryContext { + tcx, + owner, + typeck, + build_assert, + callee_summaries, + indirect_candidates, + }, + state: SummaryState { + env: LocalEnv::new(), + return_dependencies: Vec::new(), + build_assert_conditions: FxHashMap::default(), + seen_build_assert_callsites: FxHashSet::default(), + summary: FunctionSummary::default(), + }, + }; + + for (param_index, param) in body.params.iter().enumerate() { + param.pat.each_binding(|_, hir_id, _, _| { + analyzer + .state + .env + .bind_dependency(hir_id, ExprDependency::param(param_index)); + }); + } + + analyzer + } + + /// Finalize one body's summary after visiting all explicit returns and the tail expression. + fn finish_summary(mut self, body: &'tcx Body<'tcx>) -> FunctionSummary { + let body_dependency = self.expr_dependency(body.value); + self.state.return_dependencies.push(body_dependency); + self.state.summary.return_dependency = + ExprDependency::combine(self.state.return_dependencies); + self.state.summary + } + + fn with_scope(&mut self, f: impl FnOnce(&mut Self) -> R) -> R { + self.state.env.enter_scope(); + let result = f(self); + self.state.env.exit_scope(); + result + } + + /// Resolve the set of local functions represented by an expression when it is used as a + /// callable value. This is what lets the lint follow function pointers precisely enough for + /// same-body value flow. 
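+    ///
+    /// For example (sketch, mirroring the function-pointer case in the lint docs):
+    ///
+    /// ```ignore
+    /// let f: fn(usize) = helper;  // `f` now carries `helper` as a callable target
+    /// f(x);                       // the call can be resolved back to `helper`
+    /// ```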
+ fn expr_callable_targets(&self, expr: &'tcx Expr<'tcx>) -> CallableTargets { + match expr.kind { + rustc_hir::ExprKind::Path(ref qpath) => { + match self.cx.typeck.qpath_res(qpath, expr.hir_id) { + Res::Local(local) => self + .state + .env + .get_callables(local) + .cloned() + .unwrap_or_default(), + Res::Def(DefKind::Fn | DefKind::AssocFn, def_id) => def_id + .as_local() + .filter(|&def_id| is_reportable_fn(self.cx.tcx, def_id)) + .into_iter() + .collect(), + _ => FxHashSet::default(), + } + } + rustc_hir::ExprKind::Use(inner, _) + | rustc_hir::ExprKind::Cast(inner, _) + | rustc_hir::ExprKind::Type(inner, _) + | rustc_hir::ExprKind::DropTemps(inner) + | rustc_hir::ExprKind::AddrOf(_, _, inner) => self.expr_callable_targets(inner), + rustc_hir::ExprKind::Block(block, _) => block + .expr + .map(|expr| self.expr_callable_targets(expr)) + .unwrap_or_default(), + _ => FxHashSet::default(), + } + } + + /// Indirect targets are resolved once up front from the mono graph and then consumed by HIR + /// callsite id, so the summary pass does not need to know about mono items or span matching. 
+ fn indirect_targets_for_callsite(&self, hir_id: HirId) -> CallableTargets { + let Some(candidates) = self.cx.indirect_candidates.get(&self.cx.owner) else { + return FxHashSet::default(); + }; + candidates.get(&hir_id).cloned().unwrap_or_default() + } + + fn bind_pattern( + &mut self, + pat: &'tcx rustc_hir::Pat<'tcx>, + dependency: ExprDependency, + targets: CallableTargets, + ) { + pat.each_binding(|_, hir_id, _, _| { + self.state.env.bind_dependency(hir_id, dependency.clone()); + if targets.is_empty() { + self.state.env.clear_callables(hir_id); + } else { + self.state.env.bind_callables(hir_id, targets.clone()); + } + }); + } + + fn set_callable_targets(&mut self, hir_id: HirId, targets: CallableTargets) { + if targets.is_empty() { + self.state.env.clear_callables(hir_id); + } else { + self.state.env.bind_callables(hir_id, targets); + } + } + + fn combine_exprs(&mut self, exprs: I) -> ExprDependency + where + I: IntoIterator>, + { + ExprDependency::combine(exprs.into_iter().map(|expr| self.expr_dependency(expr))) + } + + fn project_param_dependencies( + &self, + actual_args: &[ExprDependency], + params: &FxHashSet, + ) -> ExprDependency { + ExprDependency::combine(params.iter().map(|¶m_index| { + actual_args + .get(param_index) + .cloned() + .unwrap_or(ExprDependency::Runtime) + })) + } + + fn local_fn_def_from_res(&self, res: Res) -> Option { + match res { + Res::Def(DefKind::Fn | DefKind::AssocFn, def_id) => def_id.as_local(), + _ => None, + } + } + + fn resolve_direct_call(&self, callee: &'tcx Expr<'tcx>) -> ResolvedCall { + let rustc_hir::ExprKind::Path(ref qpath) = callee.kind else { + return ResolvedCall::Other; + }; + let resolved = self.cx.typeck.qpath_res(qpath, callee.hir_id); + + if let Some(local_def_id) = self.local_fn_def_from_res(resolved) + && is_reportable_fn(self.cx.tcx, local_def_id) + { + return ResolvedCall::Local(local_def_id); + } + + if let Res::Def(DefKind::Fn | DefKind::AssocFn, def_id) = resolved + && self.cx.tcx.is_const_fn(def_id) + 
{ + return ResolvedCall::NonLocalConst; + } + + ResolvedCall::Other + } + + fn resolve_method_call(&self, expr: &'tcx Expr<'tcx>) -> ResolvedCall { + let Some(def_id) = self.cx.typeck.type_dependent_def_id(expr.hir_id) else { + return ResolvedCall::Other; + }; + + if let Some(local_def_id) = def_id.as_local() + && is_reportable_fn(self.cx.tcx, local_def_id) + { + return ResolvedCall::Local(local_def_id); + } + + if self.cx.tcx.is_const_fn(def_id) { + return ResolvedCall::NonLocalConst; + } + + ResolvedCall::Other + } + + fn apply_assignment(&mut self, lhs: &'tcx Expr<'tcx>, rhs: &'tcx Expr<'tcx>) { + let Some(local) = self.lhs_local(lhs) else { + return; + }; + + let dependency = self.expr_dependency(rhs); + self.state.env.bind_dependency(local, dependency); + self.set_callable_targets(local, self.expr_callable_targets(rhs)); + } + + fn apply_assign_op(&mut self, lhs: &'tcx Expr<'tcx>) { + let Some(local) = self.lhs_local(lhs) else { + return; + }; + + self.state + .env + .bind_dependency(local, ExprDependency::Runtime); + self.state.env.clear_callables(local); + } + + fn apply_let_binding(&mut self, local: &'tcx rustc_hir::LetStmt<'tcx>) { + if let Some(init) = local.init { + let dependency = self.expr_dependency(init); + let targets = self.expr_callable_targets(init); + self.bind_pattern(local.pat, dependency, targets); + } else { + self.state.env.bind_runtime_pattern(local.pat); + } + } + + /// Classify what a path depends on. Const items, const params, and immutable statics are + /// treated as effectively constant for this lint; unresolved or mutable values are runtime. + fn path_dependency(&self, qpath: &QPath<'tcx>, hir_id: HirId) -> ExprDependency { + match self.cx.typeck.qpath_res(qpath, hir_id) { + Res::Local(local) => self + .state + .env + .get_dependency(local) + .cloned() + .unwrap_or(ExprDependency::Runtime), + Res::Def( + DefKind::Const { .. } | DefKind::AssocConst { .. 
} | DefKind::ConstParam, + _, + ) => ExprDependency::Constant, + Res::Def( + DefKind::Static { + mutability: Mutability::Not, + .. + }, + _, + ) => ExprDependency::Constant, + _ => ExprDependency::Runtime, + } + } + + /// Evaluate a block expression while respecting scope-local rebinding from `let` statements and + /// assignments inside the block. + fn block_dependency(&mut self, block: &'tcx rustc_hir::Block<'tcx>) -> ExprDependency { + self.with_scope(|this| { + for stmt in block.stmts { + match stmt.kind { + StmtKind::Let(local) => this.apply_let_binding(local), + StmtKind::Expr(expr) | StmtKind::Semi(expr) => match expr.kind { + rustc_hir::ExprKind::Assign(lhs, rhs, _) => this.apply_assignment(lhs, rhs), + rustc_hir::ExprKind::AssignOp(_, lhs, _) => this.apply_assign_op(lhs), + _ => {} + }, + StmtKind::Item(..) => {} + } + } + + block + .expr + .map(|expr| this.expr_dependency(expr)) + .unwrap_or(ExprDependency::Constant) + }) + } + + /// Re-express a local helper's return-value dependency in terms of the caller's actual + /// arguments. This is what allows `helper(x)` to stay parameter-sensitive instead of collapsing + /// to a generic runtime value. + fn mapped_callee_return_dependency( + &mut self, + callee: LocalDefId, + actual_args: I, + ) -> Option + where + I: IntoIterator>, + { + let callee_summary = self.cx.callee_summaries.get(&callee)?; + let actual_args: Vec<_> = actual_args + .into_iter() + .map(|arg| self.expr_dependency(arg)) + .collect(); + + Some(match &callee_summary.return_dependency { + ExprDependency::Constant => ExprDependency::Constant, + ExprDependency::Param(params) => self.project_param_dependencies(&actual_args, params), + ExprDependency::Runtime => ExprDependency::Runtime, + }) + } + + /// Classify what value flows into an expression. This is the shared local reasoning that both + /// direct `build_assert!` uses and propagated caller requirements build on top of. 
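+    ///
+    /// Rough classification examples (hypothetical names, for illustration only):
+    ///
+    /// ```ignore
+    /// // inside `fn f(a: usize)` with `const C: usize` in scope:
+    /// //   C + 1  -> Constant
+    /// //   a + C  -> Param({0})
+    /// //   anything unrecognized -> Runtime (conservative fallback)
+    /// ```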
+ fn expr_dependency(&mut self, expr: &'tcx Expr<'tcx>) -> ExprDependency { + match expr.kind { + rustc_hir::ExprKind::ConstBlock(..) | rustc_hir::ExprKind::Lit(..) => { + ExprDependency::Constant + } + rustc_hir::ExprKind::Path(ref qpath) => self.path_dependency(qpath, expr.hir_id), + rustc_hir::ExprKind::Use(inner, _) + | rustc_hir::ExprKind::Unary(_, inner) + | rustc_hir::ExprKind::Cast(inner, _) + | rustc_hir::ExprKind::Type(inner, _) + | rustc_hir::ExprKind::DropTemps(inner) + | rustc_hir::ExprKind::Field(inner, _) + | rustc_hir::ExprKind::AddrOf(_, _, inner) + | rustc_hir::ExprKind::UnsafeBinderCast(_, inner, _) => self.expr_dependency(inner), + rustc_hir::ExprKind::Binary(_, lhs, rhs) + | rustc_hir::ExprKind::AssignOp(_, lhs, rhs) + | rustc_hir::ExprKind::Index(lhs, rhs, _) => { + ExprDependency::combine([self.expr_dependency(lhs), self.expr_dependency(rhs)]) + } + rustc_hir::ExprKind::Assign(_, rhs, _) | rustc_hir::ExprKind::Repeat(rhs, _) => { + self.expr_dependency(rhs) + } + rustc_hir::ExprKind::Array(exprs) | rustc_hir::ExprKind::Tup(exprs) => { + self.combine_exprs(exprs.iter()) + } + rustc_hir::ExprKind::Block(block, _) => self.block_dependency(block), + rustc_hir::ExprKind::Struct(_, fields, tail) => { + let mut exprs = Vec::with_capacity(fields.len() + 1); + for field in fields { + exprs.push(field.expr); + } + if let rustc_hir::StructTailExpr::Base(expr) = tail { + exprs.push(expr); + } + self.combine_exprs(exprs) + } + rustc_hir::ExprKind::If(condition, then_expr, else_expr) => { + let mut exprs = vec![condition, then_expr]; + if let Some(expr) = else_expr { + exprs.push(expr); + } + self.combine_exprs(exprs) + } + rustc_hir::ExprKind::Match(scrutinee, arms, _) => { + let mut dependencies = Vec::with_capacity(1 + arms.len() * 2); + dependencies.push(self.expr_dependency(scrutinee)); + + for arm in arms { + if let Some(guard) = arm.guard { + dependencies.push(self.expr_dependency(guard)); + } + dependencies.push(self.expr_dependency(arm.body)); + } 
+ + ExprDependency::combine(dependencies) + } + rustc_hir::ExprKind::Call(callee, args) => self.call_expr_dependency(callee, args), + rustc_hir::ExprKind::MethodCall(_, receiver, args, _) => { + self.method_call_expr_dependency(expr, receiver, args) + } + _ => ExprDependency::Runtime, + } + } + + fn call_expr_dependency( + &mut self, + callee: &'tcx Expr<'tcx>, + args: &'tcx [Expr<'tcx>], + ) -> ExprDependency { + let rustc_hir::ExprKind::Path(ref qpath) = callee.kind else { + return ExprDependency::Runtime; + }; + let resolved = self.cx.typeck.qpath_res(qpath, callee.hir_id); + let args_dependency = self.combine_exprs(args.iter()); + + // Tuple/struct constructors are const when all inputs are const even though + // they surface as calls in HIR. + if matches!(args_dependency, ExprDependency::Constant) + && matches!(resolved, Res::Def(DefKind::Ctor(..), _)) + { + return ExprDependency::Constant; + } + + // If the callee is local and already summarized, project its return-value + // dependency back onto these actual arguments instead of losing precision. 
+ if let Some(mapped_dependency) = self.mapped_call_dependency( + self.resolve_direct_call(callee), + args.iter(), + args_dependency.clone(), + ) { + return mapped_dependency; + } + + ExprDependency::Runtime + } + + fn mapped_call_dependency( + &mut self, + resolved: ResolvedCall, + actual_args: I, + constant_dependency: ExprDependency, + ) -> Option + where + I: Clone + IntoIterator>, + { + match resolved { + ResolvedCall::Local(local_def_id) => { + self.mapped_callee_return_dependency(local_def_id, actual_args) + } + ResolvedCall::NonLocalConst + if matches!(constant_dependency, ExprDependency::Constant) => + { + Some(ExprDependency::Constant) + } + ResolvedCall::Other | ResolvedCall::NonLocalConst => None, + } + } + + fn method_call_expr_dependency( + &mut self, + expr: &'tcx Expr<'tcx>, + receiver: &'tcx Expr<'tcx>, + args: &'tcx [Expr<'tcx>], + ) -> ExprDependency { + let dependency = self.combine_exprs(std::iter::once(receiver).chain(args.iter())); + + // Methods on local impls use the same summary projection as free functions, but + // include the receiver as argument zero. + if let Some(mapped_dependency) = self.mapped_call_dependency( + self.resolve_method_call(expr), + std::iter::once(receiver).chain(args.iter()), + dependency.clone(), + ) { + return mapped_dependency; + } + + ExprDependency::Runtime + } + + fn lhs_local(&self, expr: &'tcx Expr<'tcx>) -> Option { + if let rustc_hir::ExprKind::Path(ref qpath) = expr.kind + && let Res::Local(local) = self.cx.typeck.qpath_res(qpath, expr.hir_id) + { + return Some(local); + } + + None + } + + /// Propagate the callee's inline requirement through one direct call by looking only at the + /// callee parameters that actually matter to its `build_assert!` condition. 
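+    ///
+    /// Sketch (hypothetical helpers):
+    ///
+    /// ```ignore
+    /// fn callee(a: usize) { build_assert!(a < 16); }  // summary: depends on param 0
+    /// fn caller(x: usize) { callee(x); }              // inherits a dependency on its param 0
+    /// fn quiet()          { callee(4); }              // constant argument: nothing propagates
+    /// ```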
+ fn propagate_callee_requirement( + &mut self, + callee: LocalDefId, + call_span: Span, + actual_args: I, + ) where + I: IntoIterator>, + { + let Some(callee_summary) = self.cx.callee_summaries.get(&callee) else { + return; + }; + if callee_summary.requirement.param_dependencies.is_empty() { + return; + } + + let actual_args: Vec<_> = actual_args + .into_iter() + .map(|arg| self.expr_dependency(arg)) + .collect(); + + let dependency = self.project_param_dependencies( + &actual_args, + &callee_summary.requirement.param_dependencies, + ); + + self.state.summary.requirement.record_propagated_use( + dependency, + RequirementOrigin::Propagated { callee, call_span }, + ); + } + + /// Indirect edges are pre-resolved to a set of possible local callees. Apply the same + /// parameter-sensitive propagation to each candidate. + fn propagate_indirect_call_targets( + &mut self, + targets: CallableTargets, + call_span: Span, + actual_args: I, + ) where + I: Clone + IntoIterator>, + { + for callee in targets { + if is_reportable_fn(self.cx.tcx, callee) { + self.propagate_callee_requirement(callee, call_span, actual_args.clone()); + } + } + } + + /// Follow a function-pointer-like call when the callee expression itself carries a local target + /// set, e.g. `let f = helper; f(x)`. + fn maybe_propagate_indirect_call( + &mut self, + callee: &'tcx Expr<'tcx>, + args: &'tcx [Expr<'tcx>], + call_span: Span, + ) { + let targets = self.expr_callable_targets(callee); + if !targets.is_empty() { + self.propagate_indirect_call_targets(targets, call_span, args.iter()); + } + } + + /// Follow dyn-dispatch and other mono-resolved method-call edges that were keyed to the source + /// callsite during candidate collection. 
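+    ///
+    /// Sketch (mirroring the dyn-dispatch example in the lint docs):
+    ///
+    /// ```ignore
+    /// let checker: &dyn RuntimeDispatch = &RuntimeChecker;
+    /// checker.run(offset);  // mono-graph candidates supply the concrete `run` impl
+    /// ```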
+ fn maybe_propagate_indirect_method_call( + &mut self, + hir_id: HirId, + receiver: &'tcx Expr<'tcx>, + args: &'tcx [Expr<'tcx>], + call_span: Span, + ) { + let targets = self.indirect_targets_for_callsite(hir_id); + if !targets.is_empty() { + self.propagate_indirect_call_targets( + targets, + call_span, + std::iter::once(receiver).chain(args.iter()), + ); + } + } +} + +impl<'tcx> hir_visit::Visitor<'tcx> for SummaryAnalyzer<'_, 'tcx> { + fn visit_block(&mut self, block: &'tcx rustc_hir::Block<'tcx>) { + self.with_scope(|this| hir_visit::walk_block(this, block)); + } + + fn visit_stmt(&mut self, stmt: &'tcx Stmt<'tcx>) { + match stmt.kind { + StmtKind::Let(local) => { + if let Some(init) = local.init { + self.visit_expr(init); + } + + self.apply_let_binding(local); + + if let Some(els) = local.els { + self.visit_block(els); + } + } + StmtKind::Expr(expr) | StmtKind::Semi(expr) => self.visit_expr(expr), + StmtKind::Item(item) => hir_visit::walk_item(self, self.cx.tcx.hir_item(item)), + } + } + + fn visit_expr(&mut self, expr: &'tcx Expr<'tcx>) { + // Expanded HIR nodes that still carry `build_assert!` ancestry point back to the whole + // macro invocation. Remember the recovered source condition span here, then match it + // against ordinary source-level expressions later in the same traversal. 
+ if let Some(condition) = build_assert_condition(self.cx.tcx, expr, self.cx.build_assert) { + self.state + .build_assert_conditions + .entry(condition.condition_span) + .or_insert(condition.call_site); + } + + let source_span = expr.span.source_callsite(); + if let Some(&call_site) = self.state.build_assert_conditions.get(&source_span) + && self.state.seen_build_assert_callsites.insert(call_site) + { + let dependency = self.expr_dependency(expr); + if matches!(dependency, ExprDependency::Constant) { + self.state.summary.record_const_only_build_assert(call_site); + } else { + self.state + .summary + .requirement + .record_direct_use(dependency, call_site); + } + } + + match expr.kind { + rustc_hir::ExprKind::Call(callee, args) => { + self.visit_expr(callee); + for arg in args { + self.visit_expr(arg); + } + + if let ResolvedCall::Local(local_def_id) = self.resolve_direct_call(callee) { + self.propagate_callee_requirement(local_def_id, expr.span, args.iter()); + } else { + self.maybe_propagate_indirect_call(callee, args, expr.span); + } + } + rustc_hir::ExprKind::MethodCall(_, receiver, args, _) => { + self.visit_expr(receiver); + for arg in args { + self.visit_expr(arg); + } + + if let ResolvedCall::Local(local_def_id) = self.resolve_method_call(expr) + && self.cx.callee_summaries.contains_key(&local_def_id) + { + self.propagate_callee_requirement( + local_def_id, + expr.span, + std::iter::once(receiver).chain(args.iter()), + ); + } else { + self.maybe_propagate_indirect_method_call( + expr.hir_id, + receiver, + args, + expr.span, + ); + } + } + rustc_hir::ExprKind::Assign(lhs, rhs, _) => { + self.visit_expr(rhs); + self.visit_expr(lhs); + self.apply_assignment(lhs, rhs); + } + rustc_hir::ExprKind::AssignOp(_, lhs, rhs) => { + self.visit_expr(rhs); + self.visit_expr(lhs); + self.apply_assign_op(lhs); + } + rustc_hir::ExprKind::Ret(Some(value)) => { + self.visit_expr(value); + let dependency = self.expr_dependency(value); + 
self.state.return_dependencies.push(dependency);
+            }
+            _ => hir_visit::walk_expr(self, expr),
+        }
+    }
+}
+
+/// Analyze one function body against the current fixed-point summaries of its callees.
+fn analyze_body<'tcx>(
+    tcx: TyCtxt<'tcx>,
+    owner: LocalDefId,
+    typeck: &TypeckResults<'tcx>,
+    build_assert: Option<DefId>,
+    callee_summaries: &FunctionSummaries,
+    indirect_candidates: &IndirectCandidates,
+    body: &'tcx Body<'tcx>,
+) -> FunctionSummary {
+    let mut analyzer = SummaryAnalyzer::new(
+        tcx,
+        owner,
+        typeck,
+        build_assert,
+        callee_summaries,
+        indirect_candidates,
+        body,
+    );
+    hir_visit::Visitor::visit_body(&mut analyzer, body);
+    analyzer.finish_summary(body)
+}
+
+fn compute_summaries<'tcx>(
+    tcx: TyCtxt<'tcx>,
+    bodies: &FxHashMap<LocalDefId, &'tcx Body<'tcx>>,
+    body_owners: &[LocalDefId],
+    build_assert: Option<DefId>,
+    indirect_candidates: &IndirectCandidates,
+) -> FunctionSummaries {
+    let mut summaries = FunctionSummaries::default();
+
+    // Iterate to a fixpoint because one local helper's summary may depend on another helper's
+    // return dependency or inline requirement.
+    loop {
+        let mut changed = false;
+
+        for &def_id in body_owners {
+            let body = bodies[&def_id];
+            let summary = analyze_body(
+                tcx,
+                def_id,
+                tcx.typeck(def_id),
+                build_assert,
+                &summaries,
+                indirect_candidates,
+                body,
+            );
+
+            if summaries.get(&def_id) != Some(&summary) {
+                summaries.insert(def_id, summary);
+                changed = true;
+            }
+        }
+
+        if !changed {
+            break;
+        }
+    }
+
+    summaries
+}
+
+pub struct BuildAssertLints<'tcx> {
+    pub cx: &'tcx AnalysisCtxt<'tcx>,
+    pub bodies: FxHashMap<LocalDefId, &'tcx Body<'tcx>>,
+}
+
+impl_lint_pass!(BuildAssertLints<'_> => [BUILD_ASSERT_NOT_INLINED, BUILD_ASSERT_CAN_BE_CONST]);
+
+impl<'tcx> LateLintPass<'tcx> for BuildAssertLints<'tcx> {
+    fn check_fn(
+        &mut self,
+        _: &LateContext<'tcx>,
+        _: hir_visit::FnKind<'tcx>,
+        _: &'tcx rustc_hir::FnDecl<'tcx>,
+        body: &'tcx Body<'tcx>,
+        _: Span,
+        def_id: LocalDefId,
+    ) {
+        if is_reportable_fn(self.cx.tcx, def_id) {
+            self.bodies.insert(def_id, body);
+        }
+    }
+
+    fn check_crate_post(&mut self, cx: &LateContext<'tcx>) {
+        let build_assert = self
+            .cx
+            .get_klint_diagnostic_item(crate::symbol::build_assert);
+
+        let mut body_owners: Vec<_> = self.bodies.keys().copied().collect();
+        body_owners.sort_by_key(|&def_id| cx.tcx.def_span(def_id).lo());
+        let indirect_candidates = collect_indirect_candidates(cx.tcx, &self.bodies, &body_owners);
+        let summaries = compute_summaries(
+            cx.tcx,
+            &self.bodies,
+            &body_owners,
+            build_assert,
+            &indirect_candidates,
+        );
+
+        for def_id in body_owners {
+            let Some(summary) = summaries.get(&def_id) else {
+                continue;
+            };
+
+            for &span in &summary.const_only_build_asserts {
+                emit_build_assert_can_be_const(cx, span);
+            }
+
+            if summary.requirement.requires_inline()
+                && !has_inline_always(cx.tcx, def_id.to_def_id())
+            {
+                emit_build_assert_not_inlined(cx, def_id, summary);
+            }
+        }
+    }
+}
diff --git a/src/build_assert_can_be_const.rs b/src/build_assert_can_be_const.rs
new file mode 100644
index 0000000..f3ec853
--- /dev/null
+++ b/src/build_assert_can_be_const.rs
@@
-0,0 +1,29 @@ +// SPDX-License-Identifier: MIT OR Apache-2.0 + +use rustc_lint::{LateContext, LintContext}; +use rustc_session::declare_tool_lint; +use rustc_span::Span; + +use crate::diagnostic::ClosureDiag; + +declare_tool_lint! { + pub klint::BUILD_ASSERT_CAN_BE_CONST, + Warn, + "build_assert! does not depend on runtime values and can be written as a const assert" +} + +pub(crate) fn emit_build_assert_can_be_const(cx: &LateContext<'_>, span: Span) { + cx.emit_span_lint( + BUILD_ASSERT_CAN_BE_CONST, + span, + ClosureDiag(|diag| { + diag.primary_message( + "this `build_assert!` does not depend on runtime values; prefer `const { assert!(...) }` instead", + ); + diag.span_note( + span, + "this assertion is already effectively constant, so it does not need `build_assert!` to optimize away an error path", + ); + }), + ); +} diff --git a/src/build_assert_not_inlined.rs b/src/build_assert_not_inlined.rs new file mode 100644 index 0000000..b08f4f9 --- /dev/null +++ b/src/build_assert_not_inlined.rs @@ -0,0 +1,56 @@ +// SPDX-License-Identifier: MIT OR Apache-2.0 + +use rustc_hir::def_id::{DefId, LocalDefId}; +use rustc_lint::{LateContext, LintContext}; +use rustc_middle::ty::TyCtxt; +use rustc_session::declare_tool_lint; + +use crate::build_assert::{FunctionSummary, RequirementOrigin}; +use crate::diagnostic::ClosureDiag; + +declare_tool_lint! { + pub klint::BUILD_ASSERT_NOT_INLINED, + Warn, + "function depends on build_assert! but is not marked #[inline(always)]" +} + +/// This lint is about the source-level contract of user-authored functions, so only +/// `#[inline(always)]` counts as satisfying it. 
+pub(crate) fn has_inline_always(tcx: TyCtxt<'_>, def_id: DefId) -> bool {
+    tcx.codegen_fn_attrs(def_id).inline.always()
+}
+
+pub(crate) fn emit_build_assert_not_inlined(
+    cx: &LateContext<'_>,
+    def_id: LocalDefId,
+    summary: &FunctionSummary,
+) {
+    cx.emit_span_lint(
+        BUILD_ASSERT_NOT_INLINED,
+        cx.tcx.def_span(def_id),
+        ClosureDiag(|diag| {
+            diag.primary_message(
+                "this function depends on non-static values used by `build_assert!` and should be marked `#[inline(always)]`; otherwise its error path may fail to optimize away",
+            );
+
+            match summary.requirement.origin {
+                Some(RequirementOrigin::Direct { span }) => {
+                    diag.span_note(
+                        span,
+                        "`build_assert!` uses non-static values here and relies on the surrounding call chain being inlined",
+                    );
+                }
+                Some(RequirementOrigin::Propagated { callee, call_span }) => {
+                    diag.span_note(
+                        call_span,
+                        format!(
+                            "this call passes non-static values into `{}` which must be inlined for `build_assert!` to optimize away",
+                            cx.tcx.def_path_str(callee.to_def_id())
+                        ),
+                    );
+                }
+                None => {}
+            }
+        }),
+    );
+}
diff --git a/src/diagnostic/mod.rs b/src/diagnostic/mod.rs
index ba129bd..519223f 100644
--- a/src/diagnostic/mod.rs
+++ b/src/diagnostic/mod.rs
@@ -1,5 +1,6 @@
 pub(crate) mod use_stack;
 
+use rustc_errors::{Diag, DiagCtxtHandle, Diagnostic, Level};
 use rustc_middle::ty::PseudoCanonicalInput;
 
 pub struct PolyDisplay<'a, 'tcx, T>(pub &'a PseudoCanonicalInput<'tcx, T>);
@@ -23,3 +24,13 @@ where
         Ok(())
     }
 }
+
+pub(crate) struct ClosureDiag<F: FnOnce(&mut Diag<'_, ()>)>(pub F);
+
+impl<'a, F: FnOnce(&mut Diag<'_, ()>)> Diagnostic<'a, ()> for ClosureDiag<F> {
+    fn into_diag(self, dcx: DiagCtxtHandle<'a>, level: Level) -> Diag<'a, ()> {
+        let mut lint = Diag::new(dcx, level, "");
+        (self.0)(&mut lint);
+        lint
+    }
+}
diff --git a/src/diagnostic_items/out_of_band.rs b/src/diagnostic_items/out_of_band.rs
index 49d8428..f56a4a9 100644
--- a/src/diagnostic_items/out_of_band.rs
+++ b/src/diagnostic_items/out_of_band.rs
@@ -1,22 +1,28 @@
 //!
Out-of-band attributes attached without source code changes.
 
-use rustc_hir::def::{DefKind, Res};
-use rustc_hir::def_id::{CRATE_DEF_ID, DefId, LOCAL_CRATE};
+use rustc_hir::def::DefKind;
+use rustc_hir::def_id::{DefId, LOCAL_CRATE};
 use rustc_hir::diagnostic_items::DiagnosticItems;
 use rustc_middle::middle::exported_symbols::ExportedSymbol;
 use rustc_middle::ty::TyCtxt;
 
 pub fn infer_missing_items<'tcx>(tcx: TyCtxt<'tcx>, items: &mut DiagnosticItems) {
-    if !items.name_to_id.contains_key(&crate::symbol::build_error) {
-        if let Some(def_id) = infer_build_error_diagnostic_item(tcx) {
-            super::collect_item(tcx, items, crate::symbol::build_error, def_id);
-        }
+    if !items.name_to_id.contains_key(&crate::symbol::build_error)
+        && let Some(def_id) = infer_build_error_diagnostic_item(tcx)
+    {
+        super::collect_item(tcx, items, crate::symbol::build_error, def_id);
     }
 
-    if !items.name_to_id.contains_key(&crate::symbol::c_str) {
-        if let Some(def_id) = infer_c_str_diagnostic_item(tcx) {
-            super::collect_item(tcx, items, crate::symbol::c_str, def_id);
-        }
+    if !items.name_to_id.contains_key(&crate::symbol::build_assert)
+        && let Some(def_id) = infer_build_assert_diagnostic_item(tcx)
+    {
+        super::collect_item(tcx, items, crate::symbol::build_assert, def_id);
+    }
+
+    if !items.name_to_id.contains_key(&crate::symbol::c_str)
+        && let Some(def_id) = infer_c_str_diagnostic_item(tcx)
+    {
+        super::collect_item(tcx, items, crate::symbol::c_str, def_id);
     }
 }
 
@@ -32,21 +38,39 @@ pub fn infer_build_error_diagnostic_item<'tcx>(tcx: TyCtxt<'tcx>) -> Option<DefId
-pub fn infer_c_str_diagnostic_item<'tcx>(tcx: TyCtxt<'tcx>) -> Option<DefId> {
+fn infer_local_macro_diagnostic_item<'tcx>(
+    tcx: TyCtxt<'tcx>,
+    expected_path: &str,
+) -> Option<DefId> {
+    let mut matches = tcx
+        .hir_crate_items(())
+        .owners()
+        .map(|owner| owner.to_def_id())
+        .filter(|&def_id| {
+            matches!(tcx.def_kind(def_id), DefKind::Macro(_))
+                && tcx.def_path_str(def_id) == expected_path
+        });
+
+    let def_id = matches.next()?;
+    matches.next().is_none().then_some(def_id)
+}
+
+pub
fn infer_build_assert_diagnostic_item<'tcx>(tcx: TyCtxt<'tcx>) -> Option<DefId> {
     let name = tcx.crate_name(LOCAL_CRATE);
 
     if name != crate::symbol::kernel {
         return None;
     }
 
-    let c_str = tcx
-        .module_children_local(CRATE_DEF_ID)
-        .iter()
-        .find(|c| {
-            c.ident.name == crate::symbol::c_str && matches!(c.res, Res::Def(DefKind::Macro(_), _))
-        })?
-        .res
-        .def_id();
+    infer_local_macro_diagnostic_item(tcx, "kernel::prelude::build_assert")
+}
+
+pub fn infer_c_str_diagnostic_item<'tcx>(tcx: TyCtxt<'tcx>) -> Option<DefId> {
+    let name = tcx.crate_name(LOCAL_CRATE);
+
+    if name != crate::symbol::kernel {
+        return None;
+    }
 
-    Some(c_str)
+    infer_local_macro_diagnostic_item(tcx, "kernel::c_str")
 }
diff --git a/src/infallible_allocation.rs b/src/infallible_allocation.rs
index a415d4a..96816af 100644
--- a/src/infallible_allocation.rs
+++ b/src/infallible_allocation.rs
@@ -2,14 +2,14 @@
 //
 // SPDX-License-Identifier: MIT OR Apache-2.0
 
-use rustc_data_structures::fx::{FxHashMap, FxHashSet};
-use rustc_errors::{Diag, DiagCtxtHandle, Diagnostic, Level};
+use rustc_data_structures::fx::FxHashSet;
 use rustc_lint::{LateContext, LateLintPass, LintContext};
-use rustc_middle::mir::mono::MonoItem;
 use rustc_middle::ty::Instance;
 use rustc_session::{declare_lint_pass, declare_tool_lint};
-use rustc_span::{Spanned, sym};
+use rustc_span::sym;
 
+use crate::diagnostic::ClosureDiag;
+use crate::mono_graph::collect_instance_use_graph;
 use crate::monomorphize_collector::MonoItemCollectionStrategy;
 
 declare_tool_lint! {
@@ -20,73 +20,15 @@ declare_tool_lint!
{
 
 declare_lint_pass!(InfallibleAllocation => [INFALLIBLE_ALLOCATION]);
 
-struct ClosureDiag<F: FnOnce(&mut Diag<'_, ()>)>(F);
-
-impl<'a, F: FnOnce(&mut Diag<'_, ()>)> Diagnostic<'a, ()> for ClosureDiag<F> {
-    fn into_diag(self, dcx: DiagCtxtHandle<'a>, level: Level) -> Diag<'a, ()> {
-        let mut lint = Diag::new(dcx, level, "");
-        (self.0)(&mut lint);
-        lint
-    }
-}
-
 fn is_generic_fn<'tcx>(instance: Instance<'tcx>) -> bool {
     instance.args.non_erasable_generics().next().is_some()
 }
 
 impl<'tcx> LateLintPass<'tcx> for InfallibleAllocation {
     fn check_crate(&mut self, cx: &LateContext<'tcx>) {
-        // Collect all mono items to be codegened with this crate. Discard the inline map, it does
-        // not contain enough information for us; we will collect them ourselves later.
-        //
-        // Use eager mode here so dead code is also linted on.
-        let access_map = super::monomorphize_collector::collect_crate_mono_items(
-            cx.tcx,
-            MonoItemCollectionStrategy::Eager,
-        )
-        .1;
-
-        // Build a forward and backward dependency graph with span information.
-        let mut forward = FxHashMap::default();
-        let mut backward = FxHashMap::<_, Vec<_>>::default();
-
-        access_map.for_each_item_and_its_used_items(|accessor, accessees| {
-            let accessor = match accessor {
-                MonoItem::Static(s) => Instance::mono(cx.tcx, s),
-                MonoItem::Fn(v) => v,
-                _ => return,
-            };
-
-            let fwd_list = forward
-                .entry(accessor)
-                .or_insert_with(|| Vec::with_capacity(accessees.len()));
-            let mut def_span = None;
-
-            for accessee in accessees {
-                let accessee_node = match accessee.node {
-                    MonoItem::Static(s) => Instance::mono(cx.tcx, s),
-                    MonoItem::Fn(v) => v,
-                    _ => return,
-                };
-
-                // For const-evaluated items, they're collected from CTFE alloc, which does not have span
-                // information. Synthesize one with the accessor.
- let span = if accessee.span.is_dummy() { - *def_span.get_or_insert_with(|| cx.tcx.def_span(accessor.def_id())) - } else { - accessee.span - }; - - fwd_list.push(Spanned { - node: accessee_node, - span, - }); - backward.entry(accessee_node).or_default().push(Spanned { - node: accessor, - span, - }); - } - }); + let graph = collect_instance_use_graph(cx.tcx, MonoItemCollectionStrategy::Eager); + let forward = &graph.forward; + let backward = &graph.backward; // Find all fallible functions let mut visited = FxHashSet::default(); diff --git a/src/main.rs b/src/main.rs index a7e60a5..b1d9971 100755 --- a/src/main.rs +++ b/src/main.rs @@ -59,6 +59,9 @@ mod ctxt; mod atomic_context; mod attribute; mod binary_analysis; +mod build_assert; +mod build_assert_can_be_const; +mod build_assert_not_inlined; mod diagnostic; mod diagnostic_items; mod driver; @@ -66,6 +69,7 @@ mod hir_lints; mod infallible_allocation; mod lattice; mod mir; +mod mono_graph; mod monomorphize_collector; mod preempt_count; mod serde; @@ -110,6 +114,8 @@ impl Callbacks for MyCallbacks { infallible_allocation::INFALLIBLE_ALLOCATION, atomic_context::ATOMIC_CONTEXT, binary_analysis::stack_size::STACK_FRAME_TOO_LARGE, + build_assert_can_be_const::BUILD_ASSERT_CAN_BE_CONST, + build_assert_not_inlined::BUILD_ASSERT_NOT_INLINED, hir_lints::c_str_literal::C_STR_LITERAL, hir_lints::not_using_prelude::NOT_USING_PRELUDE, ]); @@ -133,6 +139,13 @@ impl Callbacks for MyCallbacks { cx: driver::cx::(tcx), }) }); + + lint_store.register_late_pass(|tcx| { + Box::new(build_assert::BuildAssertLints { + cx: driver::cx::(tcx), + bodies: Default::default(), + }) + }); })); } diff --git a/src/mono_graph.rs b/src/mono_graph.rs new file mode 100644 index 0000000..e1f873f --- /dev/null +++ b/src/mono_graph.rs @@ -0,0 +1,256 @@ +// SPDX-License-Identifier: MIT OR Apache-2.0 + +use rustc_data_structures::fx::{FxHashMap, FxHashSet}; +use rustc_hir::def::{DefKind, Res}; +use rustc_hir::def_id::{DefId, LocalDefId}; +use 
rustc_hir::intravisit as hir_visit;
+use rustc_hir::{Body, Expr, HirId};
+use rustc_middle::mir::mono::MonoItem;
+use rustc_middle::ty::{Instance, TyCtxt, TypeckResults};
+use rustc_span::{Span, Spanned};
+
+use crate::monomorphize_collector::{MonoItemCollectionStrategy, collect_crate_mono_items};
+
+pub(crate) type CallableTargets = FxHashSet<LocalDefId>;
+pub(crate) type IndirectCallsiteMap = FxHashMap<HirId, CallableTargets>;
+pub(crate) type IndirectCandidates = FxHashMap<LocalDefId, IndirectCallsiteMap>;
+
+pub struct InstanceUseGraph<'tcx> {
+    pub forward: FxHashMap<Instance<'tcx>, Vec<Spanned<Instance<'tcx>>>>,
+    pub backward: FxHashMap<Instance<'tcx>, Vec<Spanned<Instance<'tcx>>>>,
+}
+
+fn mono_item_instance<'tcx>(tcx: TyCtxt<'tcx>, item: MonoItem<'tcx>) -> Option<Instance<'tcx>> {
+    match item {
+        MonoItem::Static(def_id) => Some(Instance::mono(tcx, def_id)),
+        MonoItem::Fn(instance) => Some(instance),
+        _ => None,
+    }
+}
+
+pub fn collect_instance_use_graph<'tcx>(
+    tcx: TyCtxt<'tcx>,
+    strategy: MonoItemCollectionStrategy,
+) -> InstanceUseGraph<'tcx> {
+    let (mono_items, access_map) = collect_crate_mono_items(tcx, strategy);
+
+    let mut forward = FxHashMap::default();
+    let mut backward = FxHashMap::<Instance<'tcx>, Vec<Spanned<Instance<'tcx>>>>::default();
+
+    let _ = mono_items;
+
+    access_map.for_each_item_and_its_used_items(|accessor, accessees| {
+        let Some(accessor) = mono_item_instance(tcx, accessor) else {
+            return;
+        };
+
+        let fwd_list = forward
+            .entry(accessor)
+            .or_insert_with(|| Vec::with_capacity(accessees.len()));
+        let mut accessor_span = None;
+
+        for accessee in accessees {
+            let Some(accessee_node) = mono_item_instance(tcx, accessee.node) else {
+                continue;
+            };
+
+            // For const-evaluated items, they're collected from CTFE alloc, which does not have
+            // span information. Synthesize one with the accessor.
+            let span = if accessee.span.is_dummy() {
+                *accessor_span.get_or_insert_with(|| tcx.def_span(accessor.def_id()))
+            } else {
+                accessee.span
+            };
+
+            fwd_list.push(Spanned {
+                node: accessee_node,
+                span,
+            });
+            backward.entry(accessee_node).or_default().push(Spanned {
+                node: accessor,
+                span,
+            });
+        }
+    });
+
+    InstanceUseGraph { forward, backward }
+}
+
+#[derive(Clone, Copy)]
+struct CallsiteSpan {
+    hir_id: HirId,
+    span: Span,
+    trait_method: Option<DefId>,
+}
+
+struct CallsiteCollector<'a, 'tcx> {
+    typeck: &'a TypeckResults<'tcx>,
+    callsites: Vec<CallsiteSpan>,
+}
+
+impl<'tcx> hir_visit::Visitor<'tcx> for CallsiteCollector<'_, 'tcx> {
+    fn visit_expr(&mut self, expr: &'tcx Expr<'tcx>) {
+        match expr.kind {
+            rustc_hir::ExprKind::Call(..) => {
+                self.callsites.push(CallsiteSpan {
+                    hir_id: expr.hir_id,
+                    span: expr.span,
+                    trait_method: None,
+                });
+            }
+            rustc_hir::ExprKind::MethodCall(..) => {
+                self.callsites.push(CallsiteSpan {
+                    hir_id: expr.hir_id,
+                    span: expr.span,
+                    trait_method: self.typeck.type_dependent_def_id(expr.hir_id),
+                });
+            }
+            _ => {}
+        }
+        hir_visit::walk_expr(self, expr);
+    }
+}
+
+/// Map a mono-level use span back to the source call expression that owns it. Exact matches are
+/// preferred; otherwise choose the smallest enclosing call expression.
+fn resolve_callsite_hir_id(callsites: &[CallsiteSpan], span: Span) -> Option<HirId> {
+    let mut best = None;
+    let mut best_width = u32::MAX;
+
+    for callsite in callsites {
+        if callsite.span == span {
+            return Some(callsite.hir_id);
+        }
+        // MIR spans for indirect uses can point at a sub-expression; pick the narrowest enclosing
+        // source call expression so the analysis can key everything by `HirId`.
+        if callsite.span.lo() <= span.hi() && span.lo() <= callsite.span.hi() {
+            let width = callsite.span.hi().0 - callsite.span.lo().0;
+            if width < best_width {
+                best = Some(callsite.hir_id);
+                best_width = width;
+            }
+        }
+    }
+
+    best
+}
+
+/// Check whether a local impl method is the concrete implementation of the given trait method.
+/// This is used to recover dyn-dispatch callsites from mono edges that point at impl methods.
+fn impl_matches_trait_method(tcx: TyCtxt<'_>, candidate: LocalDefId, trait_method: DefId) -> bool {
+    let Some(trait_local_def_id) = trait_method.as_local() else {
+        return false;
+    };
+    let trait_def_id = tcx.parent(trait_local_def_id.into());
+    let impl_def_id = tcx.parent(candidate.into()).expect_local();
+    let rustc_hir::ItemKind::Impl(impl_) = &tcx.hir_expect_item(impl_def_id).kind else {
+        return false;
+    };
+    let Some(of_trait) = impl_.of_trait else {
+        return false;
+    };
+
+    tcx.item_name(candidate.to_def_id()) == tcx.item_name(trait_method)
+        && matches!(
+            of_trait.trait_ref.path.res,
+            Res::Def(DefKind::Trait, impl_trait_def_id) if impl_trait_def_id == trait_def_id
+        )
+}
+
+/// Some vtable-related mono edges do not point at the eventual method-call expression directly.
+/// When that happens, match the impl method back to a source method call using trait identity.
+fn resolve_trait_method_callsite_hir_id(
+    tcx: TyCtxt<'_>,
+    callsites: &[CallsiteSpan],
+    span: Span,
+    callee: LocalDefId,
+) -> Option<HirId> {
+    let mut best = None;
+    let mut best_width = u32::MAX;
+
+    for callsite in callsites {
+        let Some(trait_method) = callsite.trait_method else {
+            continue;
+        };
+        if !impl_matches_trait_method(tcx, callee, trait_method) {
+            continue;
+        }
+        // Vtable-related mono uses may point at the trait-object construction site instead of the
+        // eventual method call. Match them back to the source method call by trait/method identity.
+        if callsite.span.lo() <= span.hi() && span.lo() <= callsite.span.hi() {
+            let width = callsite.span.hi().0 - callsite.span.lo().0;
+            if width < best_width {
+                best = Some(callsite.hir_id);
+                best_width = width;
+            }
+        }
+    }
+
+    best
+}
+
+/// Precompute indirect-call candidates once from the monomorphized use graph and key them by
+/// source `HirId`, so HIR-based analyses can stay purely callsite-based and parameter-sensitive.
+pub(crate) fn collect_indirect_candidates<'tcx>(
+    tcx: TyCtxt<'tcx>,
+    bodies: &FxHashMap<LocalDefId, &'tcx Body<'tcx>>,
+    body_owners: &[LocalDefId],
+) -> IndirectCandidates {
+    let graph = collect_instance_use_graph(tcx, MonoItemCollectionStrategy::Eager);
+    let body_owners: FxHashSet<_> = body_owners.iter().copied().collect();
+    let mut callsites = FxHashMap::<LocalDefId, Vec<CallsiteSpan>>::default();
+    let mut candidates = IndirectCandidates::default();
+
+    for (&def_id, &body) in bodies {
+        if !body_owners.contains(&def_id) {
+            continue;
+        }
+        let mut collector = CallsiteCollector {
+            typeck: tcx.typeck(def_id),
+            callsites: Vec::new(),
+        };
+        hir_visit::Visitor::visit_body(&mut collector, body);
+        callsites.insert(def_id, collector.callsites);
+    }
+
+    for (caller_instance, callees) in &graph.forward {
+        let Some(caller_def_id) = caller_instance.def_id().as_local() else {
+            continue;
+        };
+        if !body_owners.contains(&caller_def_id) {
+            continue;
+        }
+
+        let Some(caller_callsites) = callsites.get(&caller_def_id) else {
+            continue;
+        };
+        let entry = candidates.entry(caller_def_id).or_default();
+        for callee in callees {
+            let Some(callee_def_id) = callee.node.def_id().as_local() else {
+                continue;
+            };
+            if matches!(tcx.def_kind(callee_def_id), DefKind::Fn | DefKind::AssocFn) {
+                // Resolve each mono edge to the source call expression once up front so the actual
+                // propagation logic can stay purely callsite-based.
+ let callsite_hir_id = resolve_callsite_hir_id(caller_callsites, callee.span) + .or_else(|| { + resolve_trait_method_callsite_hir_id( + tcx, + caller_callsites, + callee.span, + callee_def_id, + ) + }); + let Some(callsite_hir_id) = callsite_hir_id else { + continue; + }; + entry + .entry(callsite_hir_id) + .or_default() + .insert(callee_def_id); + } + } + } + + candidates +} diff --git a/src/symbol.rs b/src/symbol.rs index 5efeb68..a528a9a 100644 --- a/src/symbol.rs +++ b/src/symbol.rs @@ -55,6 +55,7 @@ def! { // Diagnostic items c_str, build_error, + build_assert, CONFIG_FRAME_WARN, } diff --git a/tests/ui/build_assert_can_be_const.rs b/tests/ui/build_assert_can_be_const.rs new file mode 100644 index 0000000..972c7af --- /dev/null +++ b/tests/ui/build_assert_can_be_const.rs @@ -0,0 +1,87 @@ +#![allow(klint::build_assert_not_inlined)] +#![deny(klint::build_assert_can_be_const)] + +unsafe extern "C" { + #[klint::diagnostic_item = "build_error"] + safe fn rust_build_error(); +} + +#[klint::diagnostic_item = "build_assert"] +macro_rules! build_assert { + ($expr:expr $(,)?) => { + if !$expr { + rust_build_error(); + } + }; + ($expr:expr, $msg:expr $(,)?) => { + if !$expr { + let _ = $msg; + rust_build_error(); + } + }; +} + +macro_rules! forward_build_assert { + ($expr:expr $(,)?) 
=> {
+        build_assert!($expr)
+    };
+}
+
+const OFFSET: usize = 1;
+const LIMIT: usize = 4;
+
+fn literal_const_only() {
+    build_assert!(1 < LIMIT);
+}
+
+fn const_generic_only<const N: usize>() {
+    build_assert!(OFFSET < N, "offset must stay in bounds");
+}
+
+fn wrapper_const_only() {
+    forward_build_assert!(OFFSET < LIMIT);
+}
+
+fn helper<const N: usize>() -> usize {
+    N - 1
+}
+
+fn helper_const_only<const N: usize>() {
+    build_assert!(helper::<N>() < N);
+}
+
+fn const_match_only() {
+    build_assert!(match LIMIT {
+        4 => true,
+        _ => false,
+    });
+}
+
+fn const_comment_comma() {
+    build_assert!(1 /* , */ < LIMIT);
+}
+
+fn const_comment_comma_msg() {
+    build_assert!(1 /* , */ < LIMIT, "still const");
+}
+
+#[inline(always)]
+fn runtime_dependent(offset: usize, n: usize) {
+    build_assert!(offset < n);
+}
+
+fn runtime_through_helper(offset: usize) {
+    runtime_dependent(offset, LIMIT);
+}
+
+fn main() {
+    literal_const_only();
+    const_generic_only::<LIMIT>();
+    wrapper_const_only();
+    helper_const_only::<LIMIT>();
+    const_match_only();
+    const_comment_comma();
+    const_comment_comma_msg();
+    runtime_dependent(OFFSET, LIMIT);
+    runtime_through_helper(OFFSET);
+}
diff --git a/tests/ui/build_assert_can_be_const.stderr b/tests/ui/build_assert_can_be_const.stderr
new file mode 100644
index 0000000..6cb1bbd
--- /dev/null
+++ b/tests/ui/build_assert_can_be_const.stderr
@@ -0,0 +1,97 @@
+error: this `build_assert!` does not depend on runtime values; prefer `const { assert!(...)
}` instead + --> $DIR/build_assert_can_be_const.rs:34:5 + | +34 | build_assert!(1 < LIMIT); + | ^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: this assertion is already effectively constant, so it does not need `build_assert!` to optimize away an error path + --> $DIR/build_assert_can_be_const.rs:34:5 + | +34 | build_assert!(1 < LIMIT); + | ^^^^^^^^^^^^^^^^^^^^^^^^ +note: the lint level is defined here + --> $DIR/build_assert_can_be_const.rs:2:9 + | + 2 | #![deny(klint::build_assert_can_be_const)] + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +error: this `build_assert!` does not depend on runtime values; prefer `const { assert!(...) }` instead + --> $DIR/build_assert_can_be_const.rs:38:5 + | +38 | build_assert!(OFFSET < N, "offset must stay in bounds"); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: this assertion is already effectively constant, so it does not need `build_assert!` to optimize away an error path + --> $DIR/build_assert_can_be_const.rs:38:5 + | +38 | build_assert!(OFFSET < N, "offset must stay in bounds"); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +error: this `build_assert!` does not depend on runtime values; prefer `const { assert!(...) }` instead + --> $DIR/build_assert_can_be_const.rs:42:5 + | +42 | forward_build_assert!(OFFSET < LIMIT); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: this assertion is already effectively constant, so it does not need `build_assert!` to optimize away an error path + --> $DIR/build_assert_can_be_const.rs:42:5 + | +42 | forward_build_assert!(OFFSET < LIMIT); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +error: this `build_assert!` does not depend on runtime values; prefer `const { assert!(...) 
}` instead
+  --> $DIR/build_assert_can_be_const.rs:50:5
+   |
+50 |     build_assert!(helper::<N>() < N);
+   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+   |
+note: this assertion is already effectively constant, so it does not need `build_assert!` to optimize away an error path
+  --> $DIR/build_assert_can_be_const.rs:50:5
+   |
+50 |     build_assert!(helper::<N>() < N);
+   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+error: this `build_assert!` does not depend on runtime values; prefer `const { assert!(...) }` instead
+  --> $DIR/build_assert_can_be_const.rs:54:5
+   |
+54 | /     build_assert!(match LIMIT {
+55 | |         4 => true,
+56 | |         _ => false,
+57 | |     });
+   | |______^
+   |
+note: this assertion is already effectively constant, so it does not need `build_assert!` to optimize away an error path
+  --> $DIR/build_assert_can_be_const.rs:54:5
+   |
+54 | /     build_assert!(match LIMIT {
+55 | |         4 => true,
+56 | |         _ => false,
+57 | |     });
+   | |______^
+
+error: this `build_assert!` does not depend on runtime values; prefer `const { assert!(...) }` instead
+  --> $DIR/build_assert_can_be_const.rs:61:5
+   |
+61 |     build_assert!(1 /* , */ < LIMIT);
+   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+   |
+note: this assertion is already effectively constant, so it does not need `build_assert!` to optimize away an error path
+  --> $DIR/build_assert_can_be_const.rs:61:5
+   |
+61 |     build_assert!(1 /* , */ < LIMIT);
+   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+error: this `build_assert!` does not depend on runtime values; prefer `const { assert!(...)
}` instead + --> $DIR/build_assert_can_be_const.rs:65:5 + | +65 | build_assert!(1 /* , */ < LIMIT, "still const"); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: this assertion is already effectively constant, so it does not need `build_assert!` to optimize away an error path + --> $DIR/build_assert_can_be_const.rs:65:5 + | +65 | build_assert!(1 /* , */ < LIMIT, "still const"); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +error: aborting due to 7 previous errors + diff --git a/tests/ui/build_assert_not_inlined.rs b/tests/ui/build_assert_not_inlined.rs new file mode 100644 index 0000000..97058d4 --- /dev/null +++ b/tests/ui/build_assert_not_inlined.rs @@ -0,0 +1,215 @@ +#![allow(klint::build_assert_can_be_const)] +#![deny(klint::build_assert_not_inlined)] + +unsafe extern "C" { + #[klint::diagnostic_item = "build_error"] + safe fn rust_build_error(); +} + +#[klint::diagnostic_item = "build_assert"] +macro_rules! build_assert { + ($expr:expr $(,)?) => { + if !$expr { + rust_build_error(); + } + }; + ($expr:expr, $msg:expr $(,)?) => { + if !$expr { + let _ = $msg; + rust_build_error(); + } + }; +} + +macro_rules! forward_build_assert { + ($expr:expr $(,)?) 
=> {
+        build_assert!($expr)
+    };
+}
+
+const OFFSET: usize = 1;
+const LIMIT: usize = 4;
+static STATIC_LIMIT: usize = 8;
+
+fn literal_const_only() {
+    build_assert!(1 < LIMIT);
+}
+
+fn const_only_direct<const N: usize>() {
+    build_assert!(OFFSET < N);
+}
+
+fn const_only_via_local() {
+    let offset = LIMIT - 1;
+    build_assert!(offset < LIMIT);
+}
+
+fn const_only_via_static() {
+    let offset = STATIC_LIMIT - 1;
+    build_assert!(offset < STATIC_LIMIT);
+}
+
+fn const_only_wrapper() {
+    forward_build_assert!(OFFSET < LIMIT);
+}
+
+fn const_only_message_form() {
+    build_assert!(OFFSET < LIMIT, "offset must stay in bounds");
+}
+
+fn const_helper<const N: usize>() -> usize {
+    N - 1
+}
+
+fn const_only_helper_call<const N: usize>() {
+    build_assert!(const_helper::<N>() < N);
+}
+
+#[unsafe(no_mangle)]
+fn const_only_entry() {
+    literal_const_only();
+    const_only_direct::<4>();
+    const_only_via_local();
+    const_only_via_static();
+    const_only_wrapper();
+    const_only_helper_call::<LIMIT>();
+}
+
+fn runtime_direct(offset: usize, n: usize) {
+    build_assert!(offset < n);
+}
+
+fn passthrough(value: usize) -> usize {
+    value
+}
+
+fn runtime_param_const_generic<const N: usize>(offset: usize) {
+    build_assert!(offset < N);
+}
+
+fn runtime_helper_call<const N: usize>(offset: usize) {
+    build_assert!(passthrough(offset) < N);
+}
+
+fn runtime_helper_caller(offset: usize) {
+    runtime_helper_call::<LIMIT>(offset);
+}
+
+fn runtime_local(offset: usize, n: usize) {
+    let current = offset;
+    build_assert!(current < n);
+}
+
+fn runtime_match(offset: usize, n: usize) {
+    build_assert!(match offset {
+        0 => true,
+        _ => offset < n,
+    });
+}
+
+fn runtime_caller(offset: usize, n: usize) {
+    runtime_direct(offset, n);
+}
+
+#[unsafe(no_mangle)]
+fn runtime_entry() {
+    runtime_caller(OFFSET, LIMIT);
+    runtime_param_const_generic::<LIMIT>(OFFSET);
+    runtime_helper_call::<LIMIT>(OFFSET);
+    runtime_helper_caller(OFFSET);
+    runtime_local(OFFSET, LIMIT);
+    runtime_match(OFFSET, LIMIT);
+}
+
+fn runtime_wrapper(offset: usize, n: usize) {
+    forward_build_assert!(offset < n);
+}
+
+fn
runtime_wrapper_caller(offset: usize, n: usize) { + runtime_wrapper(offset, n); +} + +#[unsafe(no_mangle)] +fn wrapper_entry() { + runtime_wrapper_caller(OFFSET, LIMIT); +} + +#[inline(always)] +fn inline_runtime_direct(offset: usize, n: usize) { + build_assert!(offset < n); +} + +#[unsafe(no_mangle)] +fn inline_runtime_entry() { + inline_runtime_direct(OFFSET, LIMIT); +} + +fn runtime_fnptr_target(offset: usize) { + runtime_direct(offset, LIMIT); +} + +fn fn_pointer_entry(offset: usize) { + let f: fn(usize) = runtime_fnptr_target; + f(offset); +} + +fn fn_pointer_const_entry() { + let f: fn(usize) = runtime_fnptr_target; + f(OFFSET); +} + +fn fn_pointer_mixed_calls(offset: usize) { + let f: fn(usize) = runtime_fnptr_target; + f(OFFSET); + f(offset); +} + +trait RuntimeDispatch { + fn run(&self, offset: usize); +} + +trait ConstRuntimeDispatch { + fn run(&self); +} + +struct RuntimeChecker; +struct ConstRuntimeChecker; + +impl RuntimeDispatch for RuntimeChecker { + fn run(&self, offset: usize) { + runtime_direct(offset, LIMIT); + } +} + +impl ConstRuntimeDispatch for ConstRuntimeChecker { + fn run(&self) { + build_assert!(OFFSET < LIMIT); + } +} + +fn dyn_dispatch_entry(offset: usize) { + let checker: &dyn RuntimeDispatch = &RuntimeChecker; + checker.run(offset); +} + +fn dyn_dispatch_const_entry() { + let checker: &dyn RuntimeDispatch = &RuntimeChecker; + checker.run(OFFSET); +} + +fn dyn_dispatch_ambiguous_names(offset: usize) { + let runtime_checker: &dyn RuntimeDispatch = &RuntimeChecker; + let const_checker: &dyn ConstRuntimeDispatch = &ConstRuntimeChecker; + const_checker.run(); + runtime_checker.run(offset); +} + +fn partially_constant_caller(offset: usize) { + runtime_direct(offset, LIMIT); +} + +#[unsafe(no_mangle)] +#[inline(always)] +fn inline_wrapper(offset: usize) { + partially_constant_caller(offset); +} diff --git a/tests/ui/build_assert_not_inlined.stderr b/tests/ui/build_assert_not_inlined.stderr new file mode 100644 index 0000000..369ef9c --- 
/dev/null +++ b/tests/ui/build_assert_not_inlined.stderr @@ -0,0 +1,188 @@ + WARN klint::atomic_context Unable to determine property for FFI function `const_only_entry` + WARN klint::atomic_context Unable to determine property for FFI function `const_only_entry` + WARN klint::atomic_context Unable to determine property for FFI function `runtime_entry` + WARN klint::atomic_context Unable to determine property for FFI function `runtime_entry` + WARN klint::atomic_context Unable to determine property for FFI function `wrapper_entry` + WARN klint::atomic_context Unable to determine property for FFI function `wrapper_entry` + WARN klint::atomic_context Unable to determine property for FFI function `inline_runtime_entry` + WARN klint::atomic_context Unable to determine property for FFI function `inline_runtime_entry` + WARN klint::atomic_context Unable to determine property for FFI function `inline_wrapper` + WARN klint::atomic_context Unable to determine property for FFI function `inline_wrapper` +error: this function depends on non-static values used by `build_assert!` and should be marked `#[inline(always)]`; otherwise its error path may fail to optimize away + --> $DIR/build_assert_not_inlined.rs:78:1 + | +78 | fn runtime_direct(offset: usize, n: usize) { + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: `build_assert!` uses non-static values here and relies on the surrounding call chain being inlined + --> $DIR/build_assert_not_inlined.rs:79:5 + | +79 | build_assert!(offset < n); + | ^^^^^^^^^^^^^^^^^^^^^^^^^ +note: the lint level is defined here + --> $DIR/build_assert_not_inlined.rs:2:9 + | + 2 | #![deny(klint::build_assert_not_inlined)] + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +error: this function depends on non-static values used by `build_assert!` and should be marked `#[inline(always)]`; otherwise its error path may fail to optimize away + --> $DIR/build_assert_not_inlined.rs:86:1 + | +86 | fn runtime_param_const_generic<const N: usize>(offset: usize) { + | 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: `build_assert!` uses non-static values here and relies on the surrounding call chain being inlined + --> $DIR/build_assert_not_inlined.rs:87:5 + | +87 | build_assert!(offset < N); + | ^^^^^^^^^^^^^^^^^^^^^^^^^ + +error: this function depends on non-static values used by `build_assert!` and should be marked `#[inline(always)]`; otherwise its error path may fail to optimize away + --> $DIR/build_assert_not_inlined.rs:90:1 + | +90 | fn runtime_helper_call<const N: usize>(offset: usize) { + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: `build_assert!` uses non-static values here and relies on the surrounding call chain being inlined + --> $DIR/build_assert_not_inlined.rs:91:5 + | +91 | build_assert!(passthrough(offset) < N); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +error: this function depends on non-static values used by `build_assert!` and should be marked `#[inline(always)]`; otherwise its error path may fail to optimize away + --> $DIR/build_assert_not_inlined.rs:94:1 + | +94 | fn runtime_helper_caller(offset: usize) { + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: this call passes non-static values into `runtime_helper_call` which must be inlined for `build_assert!` to optimize away + --> $DIR/build_assert_not_inlined.rs:95:5 + | +95 | runtime_helper_call::<LIMIT>(offset); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +error: this function depends on non-static values used by `build_assert!` and should be marked `#[inline(always)]`; otherwise its error path may fail to optimize away + --> $DIR/build_assert_not_inlined.rs:98:1 + | + 98 | fn runtime_local(offset: usize, n: usize) { + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: `build_assert!` uses non-static values here and relies on the surrounding call chain being inlined + --> $DIR/build_assert_not_inlined.rs:100:5 + | +100 | build_assert!(current < n); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^ + +error: this function depends on 
non-static values used by `build_assert!` and should be marked `#[inline(always)]`; otherwise its error path may fail to optimize away + --> $DIR/build_assert_not_inlined.rs:103:1 + | +103 | fn runtime_match(offset: usize, n: usize) { + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: `build_assert!` uses non-static values here and relies on the surrounding call chain being inlined + --> $DIR/build_assert_not_inlined.rs:104:5 + | +104 | / build_assert!(match offset { +105 | | 0 => true, +106 | | _ => offset < n, +107 | | }); + | |______^ + +error: this function depends on non-static values used by `build_assert!` and should be marked `#[inline(always)]`; otherwise its error path may fail to optimize away + --> $DIR/build_assert_not_inlined.rs:110:1 + | +110 | fn runtime_caller(offset: usize, n: usize) { + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: this call passes non-static values into `runtime_direct` which must be inlined for `build_assert!` to optimize away + --> $DIR/build_assert_not_inlined.rs:111:5 + | +111 | runtime_direct(offset, n); + | ^^^^^^^^^^^^^^^^^^^^^^^^^ + +error: this function depends on non-static values used by `build_assert!` and should be marked `#[inline(always)]`; otherwise its error path may fail to optimize away + --> $DIR/build_assert_not_inlined.rs:124:1 + | +124 | fn runtime_wrapper(offset: usize, n: usize) { + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: `build_assert!` uses non-static values here and relies on the surrounding call chain being inlined + --> $DIR/build_assert_not_inlined.rs:125:5 + | +125 | forward_build_assert!(offset < n); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +error: this function depends on non-static values used by `build_assert!` and should be marked `#[inline(always)]`; otherwise its error path may fail to optimize away + --> $DIR/build_assert_not_inlined.rs:128:1 + | +128 | fn runtime_wrapper_caller(offset: usize, n: usize) { + | 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: this call passes non-static values into `runtime_wrapper` which must be inlined for `build_assert!` to optimize away + --> $DIR/build_assert_not_inlined.rs:129:5 + | +129 | runtime_wrapper(offset, n); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^ + +error: this function depends on non-static values used by `build_assert!` and should be marked `#[inline(always)]`; otherwise its error path may fail to optimize away + --> $DIR/build_assert_not_inlined.rs:147:1 + | +147 | fn runtime_fnptr_target(offset: usize) { + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: this call passes non-static values into `runtime_direct` which must be inlined for `build_assert!` to optimize away + --> $DIR/build_assert_not_inlined.rs:148:5 + | +148 | runtime_direct(offset, LIMIT); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +error: this function depends on non-static values used by `build_assert!` and should be marked `#[inline(always)]`; otherwise its error path may fail to optimize away + --> $DIR/build_assert_not_inlined.rs:151:1 + | +151 | fn fn_pointer_entry(offset: usize) { + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: this call passes non-static values into `runtime_fnptr_target` which must be inlined for `build_assert!` to optimize away + --> $DIR/build_assert_not_inlined.rs:153:5 + | +153 | f(offset); + | ^^^^^^^^^ + +error: this function depends on non-static values used by `build_assert!` and should be marked `#[inline(always)]`; otherwise its error path may fail to optimize away + --> $DIR/build_assert_not_inlined.rs:161:1 + | +161 | fn fn_pointer_mixed_calls(offset: usize) { + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: this call passes non-static values into `runtime_fnptr_target` which must be inlined for `build_assert!` to optimize away + --> $DIR/build_assert_not_inlined.rs:164:5 + | +164 | f(offset); + | ^^^^^^^^^ + +error: this function depends on non-static values used by `build_assert!` and should be marked 
`#[inline(always)]`; otherwise its error path may fail to optimize away + --> $DIR/build_assert_not_inlined.rs:179:5 + | +179 | fn run(&self, offset: usize) { + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: this call passes non-static values into `runtime_direct` which must be inlined for `build_assert!` to optimize away + --> $DIR/build_assert_not_inlined.rs:180:9 + | +180 | runtime_direct(offset, LIMIT); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +error: this function depends on non-static values used by `build_assert!` and should be marked `#[inline(always)]`; otherwise its error path may fail to optimize away + --> $DIR/build_assert_not_inlined.rs:207:1 + | +207 | fn partially_constant_caller(offset: usize) { + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: this call passes non-static values into `runtime_direct` which must be inlined for `build_assert!` to optimize away + --> $DIR/build_assert_not_inlined.rs:208:5 + | +208 | runtime_direct(offset, LIMIT); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +error: aborting due to 14 previous errors + diff --git a/tests/ui/build_error.rs b/tests/ui/build_error.rs index 534cb70..f5aad48 100644 --- a/tests/ui/build_error.rs +++ b/tests/ui/build_error.rs @@ -1,3 +1,6 @@ +#![allow(klint::build_assert_can_be_const)] +#![allow(klint::build_assert_not_inlined)] + unsafe extern "C" { #[klint::diagnostic_item = "build_error"] safe fn rust_build_error(); diff --git a/tests/ui/build_error.stderr b/tests/ui/build_error.stderr index a565e94..a450318 100644 --- a/tests/ui/build_error.stderr +++ b/tests/ui/build_error.stderr @@ -1,23 +1,23 @@ WARN klint::atomic_context Unable to determine property for FFI function `gen_build_error` WARN klint::atomic_context Unable to determine property for FFI function `gen_build_error` error: this `build_error` reference is not optimized away - --> $DIR/build_error.rs:9:13 + --> $DIR/build_error.rs:12:13 | - 9 | rust_build_error(); +12 | rust_build_error(); | ^^^^^^^^^^^^^^^^^^ ... 
-16 | build_assert!(false); +19 | build_assert!(false); | -------------------- in this macro invocation | note: which is called from here - --> $DIR/build_error.rs:21:5 + --> $DIR/build_error.rs:24:5 | -21 | inline_call(); +24 | inline_call(); | ^^^^^^^^^^^^^ note: reference contained in `fn gen_build_error` - --> $DIR/build_error.rs:20:1 + --> $DIR/build_error.rs:23:1 | -20 | fn gen_build_error() { +23 | fn gen_build_error() { | ^^^^^^^^^^^^^^^^^^^^ = note: this error originates in the macro `build_assert` (in Nightly builds, run with -Z macro-backtrace for more info)