Struct SearchGraph

pub struct SearchGraph<D: Delegate<Cx = X>, X: Cx = <D as Delegate>::Cx> {
    root_depth: AvailableDepth,
    stack: Stack<X>,
    provisional_cache: HashMap<X::Input, Vec<ProvisionalCacheEntry<X>>>,
    _marker: PhantomData<D>,
}

Fields

root_depth: AvailableDepth

stack: Stack<X>

provisional_cache: HashMap<X::Input, Vec<ProvisionalCacheEntry<X>>>

The provisional cache contains entries for already computed goals which still depend on goals higher up in the stack. Instead of moving them to the global cache, we track them locally. A provisional cache entry is only valid until the result of one of its cycle heads changes. See the sketch after this field list for what such an entry conceptually stores.

_marker: PhantomData<D>
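
The shape of such an entry can be sketched as follows. This is an illustration only, assuming simplified types; the actual ProvisionalCacheEntry<X> is internal to the solver and its fields may differ.

// Illustrative sketch, not the actual definition of `ProvisionalCacheEntry<X>`.
struct ProvisionalCacheEntrySketch<Input, Result> {
    // The cycle heads (goals still on the stack) this result depends on. The
    // entry becomes invalid as soon as the result of any of them changes.
    heads: Vec<Input>,
    // The kind of path from the cycle head to this goal, used to decide
    // whether the entry is applicable for a later lookup.
    path_from_head: PathKindSketch,
    // Whether evaluating this goal hit the recursion depth limit.
    encountered_overflow: bool,
    // The provisionally computed result.
    result: Result,
}

// Stand-in for the solver's path classification, used only by this sketch.
enum PathKindSketch {
    Inductive,
    Coinductive,
}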

Implementations

impl<D: Delegate<Cx = X>, X: Cx> SearchGraph<D>

pub fn new(root_depth: usize) -> SearchGraph<D>

fn update_parent_goal(
    stack: &mut Stack<X>,
    step_kind_from_parent: PathKind,
    required_depth_for_nested: usize,
    heads: impl Iterator<Item = (StackDepth, CycleHead)>,
    encountered_overflow: bool,
    context: UpdateParentGoalCtxt<'_, X>,
)

Lazily update the stack entry for the parent goal. This behavior is shared between actually evaluating goals and using existing global cache entries to make sure they have the same impact on the remaining evaluation.
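
As a rough illustration of the bookkeeping this performs (the struct and field names below are assumptions made for this sketch, not the solver's actual types), the parent's stack entry aggregates what was learned while handling a nested goal:

// Illustrative sketch only; the real state lives in the solver's `Stack` and
// `StackEntry` types.
struct ParentEntrySketch {
    required_depth: usize,      // deepest nested evaluation seen so far
    encountered_overflow: bool, // whether any nested goal overflowed
    heads: Vec<usize>,          // stack depths of cycle heads the parent depends on
}

fn update_parent_goal_sketch(
    parent: &mut ParentEntrySketch,
    required_depth_for_nested: usize,
    heads: impl Iterator<Item = usize>,
    encountered_overflow: bool,
) {
    // The parent needs at least one more level of depth than its nested goal.
    parent.required_depth = parent.required_depth.max(required_depth_for_nested + 1);
    // Overflow and cycle-head dependencies propagate to the parent, regardless of
    // whether the nested goal was actually evaluated or read from the global cache.
    parent.encountered_overflow |= encountered_overflow;
    parent.heads.extend(heads);
}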

pub fn is_empty(&self) -> bool

pub fn debug_current_depth(&self) -> usize

The number of goals currently in the search graph. This should only be used for debugging purposes.

fn cycle_path_kind(
    stack: &Stack<X>,
    step_kind_to_head: PathKind,
    head: StackDepth,
) -> PathKind

Whether the path from head to the current stack entry is inductive or coinductive.

The step_kind_to_head is used to add a single additional path segment to the path on the stack which completes the cycle. Given an inductive step from A to B which then cycles coinductively back to A, we need to treat the resulting cycle as coinductive.
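
A minimal sketch of how the additional step can change the cycle kind, using an illustrative enum rather than the solver's actual PathKind:

// Sketch: a coinductive step anywhere along the cycle makes the whole cycle
// count as coinductive, matching the A -> B example above.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum PathKindSketch {
    Inductive,
    Coinductive,
}

fn combine(a: PathKindSketch, b: PathKindSketch) -> PathKindSketch {
    if a == PathKindSketch::Coinductive || b == PathKindSketch::Coinductive {
        PathKindSketch::Coinductive
    } else {
        PathKindSketch::Inductive
    }
}

fn example() {
    // Inductive step from A to B already on the stack, plus a coinductive step
    // from B back to A which completes the cycle: the cycle is coinductive.
    let path_on_stack = PathKindSketch::Inductive;
    let step_kind_to_head = PathKindSketch::Coinductive;
    assert_eq!(combine(path_on_stack, step_kind_to_head), PathKindSketch::Coinductive);
}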

pub fn enter_single_candidate(&mut self)

pub fn finish_single_candidate(&mut self) -> CandidateHeadUsages

pub fn ignore_candidate_head_usages(&mut self, usages: CandidateHeadUsages)

pub fn evaluate_root_goal_for_proof_tree(
    cx: X,
    root_depth: usize,
    input: X::Input,
    inspect: &mut D::ProofTreeBuilder,
) -> X::Result

pub fn evaluate_goal(
    &mut self,
    cx: X,
    input: X::Input,
    step_kind_from_parent: PathKind,
    inspect: &mut D::ProofTreeBuilder,
) -> X::Result

Probably the most involved method of the whole solver.

While goals get computed via D::compute_goal, this function handles caching, overflow, and cycles.
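
At a very high level, the interplay of these concerns can be sketched as follows. The trait and method names are stand-ins for this illustration and the order of checks is simplified; only the general shape is meant to be conveyed, not the actual implementation.

// Pseudocode-level sketch of goal evaluation, not the real `SearchGraph` API.
trait GoalEvaluatorSketch {
    type Input: Copy;
    type Result;

    fn exceeds_available_depth(&self, input: Self::Input) -> bool;
    fn overflow_result(&mut self, input: Self::Input) -> Self::Result;
    fn lookup_provisional_cache(&mut self, input: Self::Input) -> Option<Self::Result>;
    fn lookup_global_cache(&mut self, input: Self::Input) -> Option<Self::Result>;
    fn check_cycle_on_stack(&mut self, input: Self::Input) -> Option<Self::Result>;
    fn evaluate_and_cache(&mut self, input: Self::Input) -> Self::Result;
}

fn evaluate_goal_sketch<G: GoalEvaluatorSketch>(graph: &mut G, input: G::Input) -> G::Result {
    // 1. Give up with an overflow result if no recursion depth is left.
    if graph.exceeds_available_depth(input) {
        return graph.overflow_result(input);
    }
    // 2. Reuse an applicable provisional result computed earlier in this evaluation.
    if let Some(result) = graph.lookup_provisional_cache(input) {
        return result;
    }
    // 3. Reuse a global cache entry, but only if doing so cannot change the result.
    if let Some(result) = graph.lookup_global_cache(input) {
        return result;
    }
    // 4. If the goal is already on the stack we found a cycle and return a
    //    provisional result instead of recursing forever.
    if let Some(result) = graph.check_cycle_on_stack(input) {
        return result;
    }
    // 5. Otherwise actually evaluate the goal and cache the outcome.
    graph.evaluate_and_cache(input)
}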

fn handle_overflow(&mut self, cx: X, input: X::Input) -> X::Result

fn clear_dependent_provisional_results_for_rerun(&mut self)

When reevaluating a goal with a changed provisional result, all provisional cache entries which depend on this goal get invalidated.

Note that we keep provisional cache entries which accessed this goal as a cycle head, but don’t depend on its value.
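
A minimal sketch of this invalidation, assuming a simplified entry type rather than the actual ProvisionalCacheEntry<X>:

use std::collections::HashMap;

// Simplified stand-in used only by this sketch.
struct EntrySketch {
    // Whether this entry's result depends on the provisional value of the
    // goal that is about to be rerun (not merely on having seen it as a head).
    used_head_result: bool,
}

fn clear_dependent_sketch(cache: &mut HashMap<u32, Vec<EntrySketch>>) {
    // Drop entries whose result depends on the rerun goal's value; entries
    // which only recorded it as a cycle head are kept.
    for entries in cache.values_mut() {
        entries.retain(|entry| !entry.used_head_result);
    }
    // Remove inputs whose entry list is now empty.
    cache.retain(|_, entries| !entries.is_empty());
}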

fn rebase_provisional_cache_entries(
    &mut self,
    stack_entry: &StackEntry<X>,
    mutate_result: impl FnMut(X::Input, X::Result) -> X::Result,
)

A necessary optimization to handle complex solver cycles. A provisional cache entry relies on a set of cycle heads and the path towards these heads. When popping a cycle head from the stack after we’ve finished computing it, we can’t be sure that the provisional cache entries relying on it are still applicable. We nevertheless need to keep these cache entries to prevent hangs.

This can be thought of as pretending to reevaluate the popped head as a nested goal of this provisional result. For this to be correct, all cycles encountered while reevaluating the cycle head as a nested goal must keep the same cycle kind. See the corresponding rustc-dev-guide chapter for more details.

In case the popped cycle head failed to reach a fixpoint, anything which depends on its provisional result is invalid. Actually discarding provisional cache entries in this case would cause hangs, so we instead change the result of dependent provisional cache entries to also be ambiguous. This causes some undesirable ambiguity for nested goals whose result doesn’t actually depend on this cycle head, but that’s an acceptable tradeoff.
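
The fixpoint-failure case can be pictured with a small sketch; the types here are placeholders, while the real method works on ProvisionalCacheEntry<X> and X::Result:

// Illustrative sketch: instead of discarding dependent entries, rewrite their
// results via `mutate_result`, for example forcing them to be ambiguous.
fn rebase_sketch<Input: Copy, Result>(
    entries: &mut Vec<(Input, Result)>,
    mut mutate_result: impl FnMut(Input, &Result) -> Result,
) {
    for (input, result) in entries.iter_mut() {
        let rebased = mutate_result(*input, &*result);
        *result = rebased;
    }
}

fn force_ambiguous(entries: &mut Vec<(u32, &'static str)>) {
    // Dropping these entries could reintroduce hangs, so keep them but make
    // their results ambiguous instead.
    rebase_sketch(entries, |_input, _old| "ambiguous");
}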

fn lookup_provisional_cache(
    &mut self,
    input: X::Input,
    step_kind_from_parent: PathKind,
) -> Option<X::Result>

fn candidate_is_applicable(
    &self,
    step_kind_from_parent: PathKind,
    nested_goals: &NestedGoals<X>,
) -> bool

Even if there is a global cache entry for a given goal, we need to make sure that evaluating the goal ourselves would not have ended up depending on either a goal already on the stack or a provisional cache entry.

fn lookup_global_cache_untracked(
    &self,
    cx: X,
    input: X::Input,
    step_kind_from_parent: PathKind,
    available_depth: AvailableDepth,
) -> Option<X::Result>

Used when fuzzing the global cache. Accesses the global cache without updating the state of the search graph.

fn lookup_global_cache(
    &mut self,
    cx: X,
    input: X::Input,
    step_kind_from_parent: PathKind,
    available_depth: AvailableDepth,
) -> Option<X::Result>

Try to fetch a previously computed result from the global cache, making sure to only do so if it would match the result of reevaluating this goal.

fn check_cycle_on_stack(
    &mut self,
    cx: X,
    input: X::Input,
    step_kind_from_parent: PathKind,
) -> Option<X::Result>

fn reached_fixpoint(
    &mut self,
    cx: X,
    stack_entry: &StackEntry<X>,
    usages: HeadUsages,
    result: X::Result,
) -> bool

Whether we’ve reached a fixpoint when evaluating a cycle head.

fn evaluate_goal_in_task(
    &mut self,
    cx: X,
    input: X::Input,
    inspect: &mut D::ProofTreeBuilder,
) -> EvaluationResult<X>

When we encounter a coinductive cycle, we have to fetch the result of that cycle while we are still computing it. Because of this we continuously recompute the cycle until the result of the previous iteration is equal to the final result, at which point we are done.
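
The fixpoint iteration described here can be sketched as follows; the iteration limit and result type are assumptions for the illustration, and the real solver additionally tracks head usages, overflow, and path kinds:

// Sketch of fixpoint iteration for a cycle head.
fn fixpoint_sketch<R: PartialEq>(
    initial_provisional: R,
    mut evaluate_once: impl FnMut(&R) -> R,
    max_iterations: usize, // assumed cutoff, not the solver's actual limit
) -> Option<R> {
    let mut provisional = initial_provisional;
    for _ in 0..max_iterations {
        // Reevaluate the cycle using the previous iteration's provisional result.
        let result = evaluate_once(&provisional);
        if result == provisional {
            // The result no longer changes: we have reached a fixpoint.
            return Some(result);
        }
        provisional = result;
    }
    // Failed to converge within the limit; the solver falls back to an
    // ambiguous or overflow result in that case rather than looping forever.
    None
}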

fn insert_global_cache(
    &mut self,
    cx: X,
    input: X::Input,
    evaluation_result: EvaluationResult<X>,
    dep_node: X::DepNodeIndex,
)

When encountering a cycle, both inductive and coinductive, we only move the root into the global cache. We also store all other cycle participants involved.

We must not use the global cache entry of a root goal if a cycle participant is on the stack. This is necessary to prevent unstable results. See the comment of StackEntry::nested_goals for more details.
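
This restriction can be pictured with a small sketch; the entry type and goal representation are illustrative, not the actual global cache:

use std::collections::HashSet;

// Illustrative only: a cached cycle root also records the other cycle
// participants that were involved when its result was computed.
struct GlobalEntrySketch<Result> {
    result: Result,
    nested_goals: HashSet<&'static str>,
}

fn can_use_entry_sketch<Result>(
    entry: &GlobalEntrySketch<Result>,
    stack: &[&'static str],
) -> bool {
    // Using the entry is only sound if none of its recorded participants is
    // currently being evaluated; otherwise results could be unstable.
    stack.iter().all(|goal| !entry.nested_goals.contains(goal))
}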

Auto Trait Implementations

impl<D, X> DynSend for SearchGraph<D, X>
where <X as Cx>::Input: DynSend, D: DynSend, <X as Cx>::Result: DynSend,

impl<D, X> DynSync for SearchGraph<D, X>
where <X as Cx>::Input: DynSync, D: DynSync, <X as Cx>::Result: DynSync,

impl<D, X> Freeze for SearchGraph<D, X>

impl<D, X> RefUnwindSafe for SearchGraph<D, X>
where D: RefUnwindSafe, <X as Cx>::Input: RefUnwindSafe, <X as Cx>::Result: RefUnwindSafe,

impl<D, X> Send for SearchGraph<D, X>
where D: Send, <X as Cx>::Input: Send, <X as Cx>::Result: Send,

impl<D, X> Sync for SearchGraph<D, X>
where D: Sync, <X as Cx>::Input: Sync, <X as Cx>::Result: Sync,

impl<D, X> Unpin for SearchGraph<D, X>
where D: Unpin, <X as Cx>::Input: Unpin, <X as Cx>::Result: Unpin,

impl<D, X> UnwindSafe for SearchGraph<D, X>
where <X as Cx>::Input: UnwindSafe, D: UnwindSafe, <X as Cx>::Result: UnwindSafe,

Blanket Implementations

impl<T> Aligned for T

const ALIGN: Alignment

Alignment of Self.

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T, R> CollectAndApply<T, R> for T

fn collect_and_apply<I, F>(iter: I, f: F) -> R
where I: Iterator<Item = T>, F: FnOnce(&[T]) -> R,

Equivalent to f(&iter.collect::<Vec<_>>()).

type Output = R

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.

impl<T> Pointable for T

const ALIGN: usize

The alignment of the pointer.

type Init = T

The type for initializers.

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a value with the given initializer.

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer.

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer.

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer.

impl<T> Same for T

type Output = T

Should always be Self.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<I, T, U> Upcast<I, U> for T
where U: UpcastFrom<I, T>,

fn upcast(self, interner: I) -> U

impl<I, T> UpcastFrom<I, T> for T

fn upcast_from(from: T, _tcx: I) -> T

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.

impl<T> ErasedDestructor for T
where T: 'static,

Layout

Note: Most layout information is completely unstable and may even differ between compilations. The only exception is types with certain repr(...) attributes. Please see the Rust Reference's “Type Layout” chapter for details on type layout guarantees.

Size: 64 bytes