rustc_mir_build/builder/scope.rs

/*!
Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the THIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.

### SEME Regions

When pushing a new [Scope], we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.

For now, we record the `region::Scope` for each SEME region for later reference
(see the caveat in the next paragraph). This is because destruction scopes are
tied to them. This may change in the future so that MIR lowering determines its
own destruction scopes.

### Not so SEME Regions

In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that the lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions. Since the SEME regions
are disjoint, the mapping is still one-to-one for the set of SEME regions that
we're currently in.

Also in matches, the scopes assigned to arms are not always even SEME regions!
Each arm has a single region with one entry for each pattern. We manually
manipulate the scheduled drops in this scope to avoid dropping things multiple
times.
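
For example (an illustrative sketch, not the exact lowering), a guard on an
or-pattern may be built once per candidate pattern, so the guard's lexical
scope maps to more than one SEME region:

```ignore (illustrative)
match p {
    (x, 0) | (0, x) if x > 5 => {}
    _ => {}
}
```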

### Drops

The primary purpose for scopes is to insert drops: while building
the contents, we also accumulate places that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those places onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:

```
# let cond = true;
loop {
    let x = ..;
    if cond { break; }
    let y = ..;
}
```

When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.

### Early exit

There are numerous "normal" ways to exit a scope early: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `break_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits from a scope go to
some other enclosing scope). `break_scope` will record the set of drops
currently scheduled in a [DropTree]. Later, before `in_breakable_scope`
exits, the drops will be added to the CFG.
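
For example (an illustrative sketch; `f` is just a stand-in condition), a
labeled `break` can leave several nested scopes at once, and every value
scheduled in the scopes being left must be dropped on that exit edge:

```ignore (illustrative)
'outer: loop {
    let a = String::new();
    {
        let b = String::new();
        if f() { break 'outer; } // this edge must drop `b`, then `a`
    }
}
```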

Panics are handled in a similar fashion, except that the drops are added to the
MIR once the rest of the function has finished being lowered. If a terminator
can panic, call `diverge_from(block)`, where `block` is the block containing
that terminator.

### Breakable scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops and breakable blocks. It tracks where a `break`,
`continue` or `return` should go to.
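
For instance (an illustrative sketch; `early` and `done` are stand-in
conditions), a `break` may target either a breakable block or a loop, and the
loop scope stack is what resolves which one:

```ignore (illustrative)
let x = 'blk: {
    if early { break 'blk 0; } // targets the breakable block `'blk`
    loop {
        if done { break; }     // targets the innermost loop
    }
    1
};
```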

*/
83
84use std::mem;
85
86use rustc_data_structures::fx::FxHashMap;
87use rustc_hir::HirId;
88use rustc_index::{IndexSlice, IndexVec};
89use rustc_middle::middle::region;
90use rustc_middle::mir::*;
91use rustc_middle::thir::{ExprId, LintLevel};
92use rustc_middle::ty::{self, TyCtxt};
93use rustc_middle::{bug, span_bug};
94use rustc_session::lint::Level;
95use rustc_span::source_map::Spanned;
96use rustc_span::{DUMMY_SP, Span};
97use tracing::{debug, instrument};
98
99use crate::builder::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
100
101#[derive(Debug)]
102pub(crate) struct Scopes<'tcx> {
103    scopes: Vec<Scope>,
104
105    /// The current set of breakable scopes. See module comment for more details.
106    breakable_scopes: Vec<BreakableScope<'tcx>>,
107
108    /// The scope of the innermost if-then currently being lowered.
109    if_then_scope: Option<IfThenScope>,
110
111    /// Drops that need to be done on unwind paths. See the comment on
112    /// [DropTree] for more details.
113    unwind_drops: DropTree,
114
115    /// Drops that need to be done on paths to the `CoroutineDrop` terminator.
116    coroutine_drops: DropTree,
117}
118
119#[derive(Debug)]
120struct Scope {
121    /// The source scope this scope was created in.
122    source_scope: SourceScope,
123
124    /// the region span of this scope within source code.
125    region_scope: region::Scope,
126
127    /// set of places to drop when exiting this scope. This starts
128    /// out empty but grows as variables are declared during the
129    /// building process. This is a stack, so we always drop from the
130    /// end of the vector (top of the stack) first.
131    drops: Vec<DropData>,
132
133    moved_locals: Vec<Local>,
134
135    /// The drop index that will drop everything in and below this scope on an
136    /// unwind path.
137    cached_unwind_block: Option<DropIdx>,
138
139    /// The drop index that will drop everything in and below this scope on a
140    /// coroutine drop path.
141    cached_coroutine_drop_block: Option<DropIdx>,
142}
143
144#[derive(Clone, Copy, Debug)]
145struct DropData {
    /// The `Span` where the drop obligation was incurred (typically where the
    /// place was declared)
148    source_info: SourceInfo,
149
150    /// local to drop
151    local: Local,
152
153    /// Whether this is a value Drop or a StorageDead.
154    kind: DropKind,
155}
156
157#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
158pub(crate) enum DropKind {
159    Value,
160    Storage,
161    ForLint,
162}
163
164#[derive(Debug)]
165struct BreakableScope<'tcx> {
166    /// Region scope of the loop
167    region_scope: region::Scope,
168    /// The destination of the loop/block expression itself (i.e., where to put
169    /// the result of a `break` or `return` expression)
170    break_destination: Place<'tcx>,
171    /// Drops that happen on the `break`/`return` path.
172    break_drops: DropTree,
173    /// Drops that happen on the `continue` path.
174    continue_drops: Option<DropTree>,
175}
176
177#[derive(Debug)]
178struct IfThenScope {
179    /// The if-then scope or arm scope
180    region_scope: region::Scope,
181    /// Drops that happen on the `else` path.
182    else_drops: DropTree,
183}
184
185/// The target of an expression that breaks out of a scope
186#[derive(Clone, Copy, Debug)]
187pub(crate) enum BreakableTarget {
188    Continue(region::Scope),
189    Break(region::Scope),
190    Return,
191}
192
193rustc_index::newtype_index! {
194    #[orderable]
195    struct DropIdx {}
196}
197
198const ROOT_NODE: DropIdx = DropIdx::ZERO;
199
200/// A tree of drops that we have deferred lowering. It's used for:
201///
202/// * Drops on unwind paths
203/// * Drops on coroutine drop paths (when a suspended coroutine is dropped)
204/// * Drops on return and loop exit paths
205/// * Drops on the else path in an `if let` chain
206///
/// Once no more nodes can be added to the tree, we lower it to MIR in one go
208/// in `build_mir`.
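///
/// For example (an illustrative sketch), if a `break` needs to drop `b` and
/// then `a`, while a `return` needs to drop `c` and then `a`, the two paths
/// share a suffix of drops and the tree looks like:
///
/// ```text
/// ROOT <- drop(a) <- drop(b)   <- entry point for the `break` block
///              ^---- drop(c)   <- entry point for the `return` block
/// ```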
209#[derive(Debug)]
210struct DropTree {
211    /// Nodes in the drop tree, containing drop data and a link to the next node.
212    drop_nodes: IndexVec<DropIdx, DropNode>,
213    /// Map for finding the index of an existing node, given its contents.
214    existing_drops_map: FxHashMap<DropNodeKey, DropIdx>,
215    /// Edges into the `DropTree` that need to be added once it's lowered.
216    entry_points: Vec<(DropIdx, BasicBlock)>,
217}
218
219/// A single node in the drop tree.
220#[derive(Debug)]
221struct DropNode {
222    /// Info about the drop to be performed at this node in the drop tree.
223    data: DropData,
224    /// Index of the "next" drop to perform (in drop order, not declaration order).
225    next: DropIdx,
226}
227
228/// Subset of [`DropNode`] used for reverse lookup in a hash table.
229#[derive(Debug, PartialEq, Eq, Hash)]
230struct DropNodeKey {
231    next: DropIdx,
232    local: Local,
233}
234
235impl Scope {
236    /// Whether there's anything to do for the cleanup path, that is,
237    /// when unwinding through this scope. This includes destructors,
238    /// but not StorageDead statements, which don't get emitted at all
239    /// for unwinding, for several reasons:
240    ///  * clang doesn't emit llvm.lifetime.end for C++ unwinding
241    ///  * LLVM's memory dependency analysis can't handle it atm
242    ///  * polluting the cleanup MIR with StorageDead creates
    ///    landing pads even though there are no actual destructors
244    ///  * freeing up stack space has no effect during unwinding
    /// Note that for coroutines we do emit StorageDeads, for use by
    /// optimizations in the MIR coroutine transform.
247    fn needs_cleanup(&self) -> bool {
248        self.drops.iter().any(|drop| match drop.kind {
249            DropKind::Value | DropKind::ForLint => true,
250            DropKind::Storage => false,
251        })
252    }
253
254    fn invalidate_cache(&mut self) {
255        self.cached_unwind_block = None;
256        self.cached_coroutine_drop_block = None;
257    }
258}
259
/// A trait that determines how [DropTree] creates its blocks and
261/// links to any entry nodes.
262trait DropTreeBuilder<'tcx> {
263    /// Create a new block for the tree. This should call either
264    /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
265    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;
266
267    /// Links a block outside the drop tree, `from`, to the block `to` inside
268    /// the drop tree.
269    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
270}
271
272impl DropTree {
273    fn new() -> Self {
274        // The root node of the tree doesn't represent a drop, but instead
275        // represents the block in the tree that should be jumped to once all
276        // of the required drops have been performed.
277        let fake_source_info = SourceInfo::outermost(DUMMY_SP);
278        let fake_data =
279            DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
280        let drop_nodes = IndexVec::from_raw(vec![DropNode { data: fake_data, next: DropIdx::MAX }]);
281        Self { drop_nodes, entry_points: Vec::new(), existing_drops_map: FxHashMap::default() }
282    }
283
284    /// Adds a node to the drop tree, consisting of drop data and the index of
285    /// the "next" drop (in drop order), which could be the sentinel [`ROOT_NODE`].
286    ///
287    /// If there is already an equivalent node in the tree, nothing is added, and
288    /// that node's index is returned. Otherwise, the new node's index is returned.
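    ///
    /// For example (an illustrative sketch), calling `add_drop` twice with the
    /// same `local` and the same `next` yields the same `DropIdx` both times,
    /// so exit paths that share a suffix of drops also share tree nodes.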
289    fn add_drop(&mut self, data: DropData, next: DropIdx) -> DropIdx {
290        let drop_nodes = &mut self.drop_nodes;
291        *self
292            .existing_drops_map
293            .entry(DropNodeKey { next, local: data.local })
294            // Create a new node, and also add its index to the map.
295            .or_insert_with(|| drop_nodes.push(DropNode { data, next }))
296    }
297
298    /// Registers `from` as an entry point to this drop tree, at `to`.
299    ///
300    /// During [`Self::build_mir`], `from` will be linked to the corresponding
301    /// block within the drop tree.
302    fn add_entry_point(&mut self, from: BasicBlock, to: DropIdx) {
303        debug_assert!(to < self.drop_nodes.next_index());
304        self.entry_points.push((to, from));
305    }
306
307    /// Builds the MIR for a given drop tree.
308    fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
309        &mut self,
310        cfg: &mut CFG<'tcx>,
311        root_node: Option<BasicBlock>,
312    ) -> IndexVec<DropIdx, Option<BasicBlock>> {
313        debug!("DropTree::build_mir(drops = {:#?})", self);
314
315        let mut blocks = self.assign_blocks::<T>(cfg, root_node);
316        self.link_blocks(cfg, &mut blocks);
317
318        blocks
319    }
320
321    /// Assign blocks for all of the drops in the drop tree that need them.
322    fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
323        &mut self,
324        cfg: &mut CFG<'tcx>,
325        root_node: Option<BasicBlock>,
326    ) -> IndexVec<DropIdx, Option<BasicBlock>> {
327        // StorageDead statements can share blocks with each other and also with
328        // a Drop terminator. We iterate through the drops to find which drops
329        // need their own block.
330        #[derive(Clone, Copy)]
331        enum Block {
332            // This drop is unreachable
333            None,
334            // This drop is only reachable through the `StorageDead` with the
335            // specified index.
336            Shares(DropIdx),
337            // This drop has more than one way of being reached, or it is
338            // branched to from outside the tree, or its predecessor is a
339            // `Value` drop.
340            Own,
341        }
342
343        let mut blocks = IndexVec::from_elem(None, &self.drop_nodes);
344        blocks[ROOT_NODE] = root_node;
345
346        let mut needs_block = IndexVec::from_elem(Block::None, &self.drop_nodes);
347        if root_node.is_some() {
348            // In some cases (such as drops for `continue`) the root node
349            // already has a block. In this case, make sure that we don't
350            // override it.
351            needs_block[ROOT_NODE] = Block::Own;
352        }
353
354        // Sort so that we only need to check the last value.
355        let entry_points = &mut self.entry_points;
356        entry_points.sort();
357
358        for (drop_idx, drop_node) in self.drop_nodes.iter_enumerated().rev() {
359            if entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
360                let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
361                needs_block[drop_idx] = Block::Own;
362                while entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
363                    let entry_block = entry_points.pop().unwrap().1;
364                    T::link_entry_point(cfg, entry_block, block);
365                }
366            }
367            match needs_block[drop_idx] {
368                Block::None => continue,
369                Block::Own => {
370                    blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
371                }
372                Block::Shares(pred) => {
373                    blocks[drop_idx] = blocks[pred];
374                }
375            }
376            if let DropKind::Value = drop_node.data.kind {
377                needs_block[drop_node.next] = Block::Own;
378            } else if drop_idx != ROOT_NODE {
379                match &mut needs_block[drop_node.next] {
380                    pred @ Block::None => *pred = Block::Shares(drop_idx),
381                    pred @ Block::Shares(_) => *pred = Block::Own,
382                    Block::Own => (),
383                }
384            }
385        }
386
387        debug!("assign_blocks: blocks = {:#?}", blocks);
388        assert!(entry_points.is_empty());
389
390        blocks
391    }
392
393    fn link_blocks<'tcx>(
394        &self,
395        cfg: &mut CFG<'tcx>,
396        blocks: &IndexSlice<DropIdx, Option<BasicBlock>>,
397    ) {
398        for (drop_idx, drop_node) in self.drop_nodes.iter_enumerated().rev() {
399            let Some(block) = blocks[drop_idx] else { continue };
400            match drop_node.data.kind {
401                DropKind::Value => {
402                    let terminator = TerminatorKind::Drop {
403                        target: blocks[drop_node.next].unwrap(),
404                        // The caller will handle this if needed.
405                        unwind: UnwindAction::Terminate(UnwindTerminateReason::InCleanup),
406                        place: drop_node.data.local.into(),
407                        replace: false,
408                        drop: None,
409                        async_fut: None,
410                    };
411                    cfg.terminate(block, drop_node.data.source_info, terminator);
412                }
413                DropKind::ForLint => {
414                    let stmt = Statement {
415                        source_info: drop_node.data.source_info,
416                        kind: StatementKind::BackwardIncompatibleDropHint {
417                            place: Box::new(drop_node.data.local.into()),
418                            reason: BackwardIncompatibleDropReason::Edition2024,
419                        },
420                    };
421                    cfg.push(block, stmt);
422                    let target = blocks[drop_node.next].unwrap();
423                    if target != block {
424                        // Diagnostics don't use this `Span` but debuginfo
425                        // might. Since we don't want breakpoints to be placed
426                        // here, especially when this is on an unwind path, we
427                        // use `DUMMY_SP`.
428                        let source_info =
429                            SourceInfo { span: DUMMY_SP, ..drop_node.data.source_info };
430                        let terminator = TerminatorKind::Goto { target };
431                        cfg.terminate(block, source_info, terminator);
432                    }
433                }
434                // Root nodes don't correspond to a drop.
435                DropKind::Storage if drop_idx == ROOT_NODE => {}
436                DropKind::Storage => {
437                    let stmt = Statement {
438                        source_info: drop_node.data.source_info,
439                        kind: StatementKind::StorageDead(drop_node.data.local),
440                    };
441                    cfg.push(block, stmt);
442                    let target = blocks[drop_node.next].unwrap();
443                    if target != block {
444                        // Diagnostics don't use this `Span` but debuginfo
445                        // might. Since we don't want breakpoints to be placed
446                        // here, especially when this is on an unwind path, we
447                        // use `DUMMY_SP`.
448                        let source_info =
449                            SourceInfo { span: DUMMY_SP, ..drop_node.data.source_info };
450                        let terminator = TerminatorKind::Goto { target };
451                        cfg.terminate(block, source_info, terminator);
452                    }
453                }
454            }
455        }
456    }
457}
458
459impl<'tcx> Scopes<'tcx> {
460    pub(crate) fn new() -> Self {
461        Self {
462            scopes: Vec::new(),
463            breakable_scopes: Vec::new(),
464            if_then_scope: None,
465            unwind_drops: DropTree::new(),
466            coroutine_drops: DropTree::new(),
467        }
468    }
469
470    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
471        debug!("push_scope({:?})", region_scope);
472        self.scopes.push(Scope {
473            source_scope: vis_scope,
474            region_scope: region_scope.0,
475            drops: vec![],
476            moved_locals: vec![],
477            cached_unwind_block: None,
478            cached_coroutine_drop_block: None,
479        });
480    }
481
482    fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
483        let scope = self.scopes.pop().unwrap();
484        assert_eq!(scope.region_scope, region_scope.0);
485        scope
486    }
487
488    fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
489        self.scopes
490            .iter()
491            .rposition(|scope| scope.region_scope == region_scope)
492            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
493    }
494
495    /// Returns the topmost active scope, which is known to be alive until
496    /// the next scope expression.
497    fn topmost(&self) -> region::Scope {
498        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
499    }
500}
501
502impl<'a, 'tcx> Builder<'a, 'tcx> {
503    // Adding and removing scopes
504    // ==========================
505
506    ///  Start a breakable scope, which tracks where `continue`, `break` and
507    ///  `return` should branch to.
508    pub(crate) fn in_breakable_scope<F>(
509        &mut self,
510        loop_block: Option<BasicBlock>,
511        break_destination: Place<'tcx>,
512        span: Span,
513        f: F,
514    ) -> BlockAnd<()>
515    where
516        F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
517    {
518        let region_scope = self.scopes.topmost();
519        let scope = BreakableScope {
520            region_scope,
521            break_destination,
522            break_drops: DropTree::new(),
523            continue_drops: loop_block.map(|_| DropTree::new()),
524        };
525        self.scopes.breakable_scopes.push(scope);
526        let normal_exit_block = f(self);
527        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
528        assert!(breakable_scope.region_scope == region_scope);
529        let break_block =
530            self.build_exit_tree(breakable_scope.break_drops, region_scope, span, None);
531        if let Some(drops) = breakable_scope.continue_drops {
532            self.build_exit_tree(drops, region_scope, span, loop_block);
533        }
534        match (normal_exit_block, break_block) {
535            (Some(block), None) | (None, Some(block)) => block,
536            (None, None) => self.cfg.start_new_block().unit(),
537            (Some(normal_block), Some(exit_block)) => {
538                let target = self.cfg.start_new_block();
539                let source_info = self.source_info(span);
540                self.cfg.terminate(
541                    normal_block.into_block(),
542                    source_info,
543                    TerminatorKind::Goto { target },
544                );
545                self.cfg.terminate(
546                    exit_block.into_block(),
547                    source_info,
548                    TerminatorKind::Goto { target },
549                );
550                target.unit()
551            }
552        }
553    }
554
555    /// Start an if-then scope which tracks drop for `if` expressions and `if`
556    /// guards.
557    ///
558    /// For an if-let chain:
559    ///
560    /// if let Some(x) = a && let Some(y) = b && let Some(z) = c { ... }
561    ///
562    /// There are three possible ways the condition can be false and we may have
563    /// to drop `x`, `x` and `y`, or neither depending on which binding fails.
564    /// To handle this correctly we use a `DropTree` in a similar way to a
565    /// `loop` expression and 'break' out on all of the 'else' paths.
566    ///
567    /// Notes:
568    /// - We don't need to keep a stack of scopes in the `Builder` because the
569    ///   'else' paths will only leave the innermost scope.
570    /// - This is also used for match guards.
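    ///
    /// An illustrative sketch of the drops needed on each failure path of the
    /// chain above (assuming `x`, `y` and `z` all need dropping):
    ///
    /// ```text
    /// `a` is `None` -> (nothing bound yet)   -> else block
    /// `b` is `None` -> drop(x)               -> else block
    /// `c` is `None` -> drop(y), then drop(x) -> else block
    /// ```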
571    pub(crate) fn in_if_then_scope<F>(
572        &mut self,
573        region_scope: region::Scope,
574        span: Span,
575        f: F,
576    ) -> (BasicBlock, BasicBlock)
577    where
578        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
579    {
580        let scope = IfThenScope { region_scope, else_drops: DropTree::new() };
581        let previous_scope = mem::replace(&mut self.scopes.if_then_scope, Some(scope));
582
583        let then_block = f(self).into_block();
584
585        let if_then_scope = mem::replace(&mut self.scopes.if_then_scope, previous_scope).unwrap();
586        assert!(if_then_scope.region_scope == region_scope);
587
588        let else_block =
589            self.build_exit_tree(if_then_scope.else_drops, region_scope, span, None).map_or_else(
590                || self.cfg.start_new_block(),
591                |else_block_and| else_block_and.into_block(),
592            );
593
594        (then_block, else_block)
595    }
596
597    /// Convenience wrapper that pushes a scope and then executes `f`
598    /// to build its contents, popping the scope afterwards.
599    #[instrument(skip(self, f), level = "debug")]
600    pub(crate) fn in_scope<F, R>(
601        &mut self,
602        region_scope: (region::Scope, SourceInfo),
603        lint_level: LintLevel,
604        f: F,
605    ) -> BlockAnd<R>
606    where
607        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
608    {
609        let source_scope = self.source_scope;
610        if let LintLevel::Explicit(current_hir_id) = lint_level {
611            let parent_id =
612                self.source_scopes[source_scope].local_data.as_ref().unwrap_crate_local().lint_root;
613            self.maybe_new_source_scope(region_scope.1.span, current_hir_id, parent_id);
614        }
615        self.push_scope(region_scope);
616        let mut block;
617        let rv = unpack!(block = f(self));
618        block = self.pop_scope(region_scope, block).into_block();
619        self.source_scope = source_scope;
620        debug!(?block);
621        block.and(rv)
622    }
623
624    /// Push a scope onto the stack. You can then build code in this
625    /// scope and call `pop_scope` afterwards. Note that these two
626    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
628    pub(crate) fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
629        self.scopes.push_scope(region_scope, self.source_scope);
630    }
631
632    /// Pops a scope, which should have region scope `region_scope`,
633    /// adding any drops onto the end of `block` that are needed.
634    /// This must match 1-to-1 with `push_scope`.
635    pub(crate) fn pop_scope(
636        &mut self,
637        region_scope: (region::Scope, SourceInfo),
638        mut block: BasicBlock,
639    ) -> BlockAnd<()> {
640        debug!("pop_scope({:?}, {:?})", region_scope, block);
641
642        block = self.leave_top_scope(block);
643
644        self.scopes.pop_scope(region_scope);
645
646        block.unit()
647    }
648
649    /// Sets up the drops for breaking from `block` to `target`.
650    pub(crate) fn break_scope(
651        &mut self,
652        mut block: BasicBlock,
653        value: Option<ExprId>,
654        target: BreakableTarget,
655        source_info: SourceInfo,
656    ) -> BlockAnd<()> {
657        let span = source_info.span;
658
659        let get_scope_index = |scope: region::Scope| {
660            // find the loop-scope by its `region::Scope`.
661            self.scopes
662                .breakable_scopes
663                .iter()
664                .rposition(|breakable_scope| breakable_scope.region_scope == scope)
665                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
666        };
667        let (break_index, destination) = match target {
668            BreakableTarget::Return => {
669                let scope = &self.scopes.breakable_scopes[0];
670                if scope.break_destination != Place::return_place() {
671                    span_bug!(span, "`return` in item with no return scope");
672                }
673                (0, Some(scope.break_destination))
674            }
675            BreakableTarget::Break(scope) => {
676                let break_index = get_scope_index(scope);
677                let scope = &self.scopes.breakable_scopes[break_index];
678                (break_index, Some(scope.break_destination))
679            }
680            BreakableTarget::Continue(scope) => {
681                let break_index = get_scope_index(scope);
682                (break_index, None)
683            }
684        };
685
686        match (destination, value) {
687            (Some(destination), Some(value)) => {
688                debug!("stmt_expr Break val block_context.push(SubExpr)");
689                self.block_context.push(BlockFrame::SubExpr);
690                block = self.expr_into_dest(destination, block, value).into_block();
691                self.block_context.pop();
692            }
693            (Some(destination), None) => {
694                self.cfg.push_assign_unit(block, source_info, destination, self.tcx)
695            }
696            (None, Some(_)) => {
                panic!("`return`, `become` and `break` with a value must have a destination")
698            }
699            (None, None) => {
700                if self.tcx.sess.instrument_coverage() {
701                    // Normally we wouldn't build any MIR in this case, but that makes it
702                    // harder for coverage instrumentation to extract a relevant span for
703                    // `continue` expressions. So here we inject a dummy statement with the
704                    // desired span.
705                    self.cfg.push_coverage_span_marker(block, source_info);
706                }
707            }
708        }
709
710        let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
711        let scope_index = self.scopes.scope_index(region_scope, span);
712        let drops = if destination.is_some() {
713            &mut self.scopes.breakable_scopes[break_index].break_drops
714        } else {
715            let Some(drops) = self.scopes.breakable_scopes[break_index].continue_drops.as_mut()
716            else {
717                self.tcx.dcx().span_delayed_bug(
718                    source_info.span,
719                    "unlabelled `continue` within labelled block",
720                );
721                self.cfg.terminate(block, source_info, TerminatorKind::Unreachable);
722
723                return self.cfg.start_new_block().unit();
724            };
725            drops
726        };
727
728        let mut drop_idx = ROOT_NODE;
729        for scope in &self.scopes.scopes[scope_index + 1..] {
730            for drop in &scope.drops {
731                drop_idx = drops.add_drop(*drop, drop_idx);
732            }
733        }
734        drops.add_entry_point(block, drop_idx);
735
736        // `build_drop_trees` doesn't have access to our source_info, so we
737        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
738        // because MIR type checking will panic if it hasn't been overwritten.
739        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
740        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);
741
742        self.cfg.start_new_block().unit()
743    }
744
745    /// Sets up the drops for breaking from `block` due to an `if` condition
746    /// that turned out to be false.
747    ///
748    /// Must be called in the context of [`Builder::in_if_then_scope`], so that
749    /// there is an if-then scope to tell us what the target scope is.
750    pub(crate) fn break_for_else(&mut self, block: BasicBlock, source_info: SourceInfo) {
751        let if_then_scope = self
752            .scopes
753            .if_then_scope
754            .as_ref()
755            .unwrap_or_else(|| span_bug!(source_info.span, "no if-then scope found"));
756
757        let target = if_then_scope.region_scope;
758        let scope_index = self.scopes.scope_index(target, source_info.span);
759
760        // Upgrade `if_then_scope` to `&mut`.
761        let if_then_scope = self.scopes.if_then_scope.as_mut().expect("upgrading & to &mut");
762
763        let mut drop_idx = ROOT_NODE;
764        let drops = &mut if_then_scope.else_drops;
765        for scope in &self.scopes.scopes[scope_index + 1..] {
766            for drop in &scope.drops {
767                drop_idx = drops.add_drop(*drop, drop_idx);
768            }
769        }
770        drops.add_entry_point(block, drop_idx);
771
772        // `build_drop_trees` doesn't have access to our source_info, so we
773        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
774        // because MIR type checking will panic if it hasn't been overwritten.
775        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
776        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);
777    }
778
779    /// Sets up the drops for explicit tail calls.
780    ///
781    /// Unlike other kinds of early exits, tail calls do not go through the drop tree.
782    /// Instead, all scheduled drops are immediately added to the CFG.
783    pub(crate) fn break_for_tail_call(
784        &mut self,
785        mut block: BasicBlock,
786        args: &[Spanned<Operand<'tcx>>],
787        source_info: SourceInfo,
788    ) -> BlockAnd<()> {
789        let arg_drops: Vec<_> = args
790            .iter()
791            .rev()
792            .filter_map(|arg| match &arg.node {
793                Operand::Copy(_) => bug!("copy op in tail call args"),
794                Operand::Move(place) => {
795                    let local =
796                        place.as_local().unwrap_or_else(|| bug!("projection in tail call args"));
797
798                    if !self.local_decls[local].ty.needs_drop(self.tcx, self.typing_env()) {
799                        return None;
800                    }
801
802                    Some(DropData { source_info, local, kind: DropKind::Value })
803                }
804                Operand::Constant(_) => None,
805            })
806            .collect();
807
808        let mut unwind_to = self.diverge_cleanup_target(
809            self.scopes.scopes.iter().rev().nth(1).unwrap().region_scope,
810            DUMMY_SP,
811        );
812        let typing_env = self.typing_env();
813        let unwind_drops = &mut self.scopes.unwind_drops;
814
        // The innermost scope contains only the destructors for the tail call arguments;
        // we only want to drop these in case of a panic, so we skip it.
817        for scope in self.scopes.scopes[1..].iter().rev().skip(1) {
818            // FIXME(explicit_tail_calls) code duplication with `build_scope_drops`
819            for drop_data in scope.drops.iter().rev() {
820                let source_info = drop_data.source_info;
821                let local = drop_data.local;
822
823                if !self.local_decls[local].ty.needs_drop(self.tcx, typing_env) {
824                    continue;
825                }
826
827                match drop_data.kind {
828                    DropKind::Value => {
829                        // `unwind_to` should drop the value that we're about to
830                        // schedule. If dropping this value panics, then we continue
831                        // with the *next* value on the unwind path.
832                        debug_assert_eq!(
833                            unwind_drops.drop_nodes[unwind_to].data.local,
834                            drop_data.local
835                        );
836                        debug_assert_eq!(
837                            unwind_drops.drop_nodes[unwind_to].data.kind,
838                            drop_data.kind
839                        );
840                        unwind_to = unwind_drops.drop_nodes[unwind_to].next;
841
842                        let mut unwind_entry_point = unwind_to;
843
844                        // the tail call arguments must be dropped if any of these drops panic
845                        for drop in arg_drops.iter().copied() {
846                            unwind_entry_point = unwind_drops.add_drop(drop, unwind_entry_point);
847                        }
848
849                        unwind_drops.add_entry_point(block, unwind_entry_point);
850
851                        let next = self.cfg.start_new_block();
852                        self.cfg.terminate(
853                            block,
854                            source_info,
855                            TerminatorKind::Drop {
856                                place: local.into(),
857                                target: next,
858                                unwind: UnwindAction::Continue,
859                                replace: false,
860                                drop: None,
861                                async_fut: None,
862                            },
863                        );
864                        block = next;
865                    }
866                    DropKind::ForLint => {
867                        self.cfg.push(
868                            block,
869                            Statement {
870                                source_info,
871                                kind: StatementKind::BackwardIncompatibleDropHint {
872                                    place: Box::new(local.into()),
873                                    reason: BackwardIncompatibleDropReason::Edition2024,
874                                },
875                            },
876                        );
877                    }
878                    DropKind::Storage => {
879                        // Only temps and vars need their storage dead.
880                        assert!(local.index() > self.arg_count);
881                        self.cfg.push(
882                            block,
883                            Statement { source_info, kind: StatementKind::StorageDead(local) },
884                        );
885                    }
886                }
887            }
888        }
889
890        block.unit()
891    }
892
893    fn is_async_drop_impl(
894        tcx: TyCtxt<'tcx>,
895        local_decls: &IndexVec<Local, LocalDecl<'tcx>>,
896        typing_env: ty::TypingEnv<'tcx>,
897        local: Local,
898    ) -> bool {
899        let ty = local_decls[local].ty;
900        if ty.is_async_drop(tcx, typing_env) || ty.is_coroutine() {
901            return true;
902        }
903        ty.needs_async_drop(tcx, typing_env)
904    }
905    fn is_async_drop(&self, local: Local) -> bool {
906        Self::is_async_drop_impl(self.tcx, &self.local_decls, self.typing_env(), local)
907    }
908
909    fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
910        // If we are emitting a `drop` statement, we need to have the cached
911        // diverge cleanup pads ready in case that drop panics.
912        let needs_cleanup = self.scopes.scopes.last().is_some_and(|scope| scope.needs_cleanup());
913        let is_coroutine = self.coroutine.is_some();
914        let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };
915
916        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
917        let has_async_drops = is_coroutine
918            && scope.drops.iter().any(|v| v.kind == DropKind::Value && self.is_async_drop(v.local));
919        let dropline_to = if has_async_drops { Some(self.diverge_dropline()) } else { None };
920        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
921        let typing_env = self.typing_env();
922        build_scope_drops(
923            &mut self.cfg,
924            &mut self.scopes.unwind_drops,
925            &mut self.scopes.coroutine_drops,
926            scope,
927            block,
928            unwind_to,
929            dropline_to,
930            is_coroutine && needs_cleanup,
931            self.arg_count,
932            |v: Local| Self::is_async_drop_impl(self.tcx, &self.local_decls, typing_env, v),
933        )
934        .into_block()
935    }
936
937    /// Possibly creates a new source scope if `current_root` and `parent_root`
938    /// are different, or if -Zmaximal-hir-to-mir-coverage is enabled.
939    pub(crate) fn maybe_new_source_scope(
940        &mut self,
941        span: Span,
942        current_id: HirId,
943        parent_id: HirId,
944    ) {
945        let (current_root, parent_root) =
946            if self.tcx.sess.opts.unstable_opts.maximal_hir_to_mir_coverage {
947                // Some consumers of rustc need to map MIR locations back to HIR nodes. Currently
948                // the only part of rustc that tracks MIR -> HIR is the
949                // `SourceScopeLocalData::lint_root` field that tracks lint levels for MIR
950                // locations. Normally the number of source scopes is limited to the set of nodes
951                // with lint annotations. The -Zmaximal-hir-to-mir-coverage flag changes this
952                // behavior to maximize the number of source scopes, increasing the granularity of
953                // the MIR->HIR mapping.
954                (current_id, parent_id)
955            } else {
956                // Use `maybe_lint_level_root_bounded` to avoid adding Hir dependencies on our
957                // parents. We estimate the true lint roots here to avoid creating a lot of source
958                // scopes.
959                (
960                    self.maybe_lint_level_root_bounded(current_id),
961                    if parent_id == self.hir_id {
962                        parent_id // this is very common
963                    } else {
964                        self.maybe_lint_level_root_bounded(parent_id)
965                    },
966                )
967            };
968
969        if current_root != parent_root {
970            let lint_level = LintLevel::Explicit(current_root);
971            self.source_scope = self.new_source_scope(span, lint_level);
972        }
973    }
974
975    /// Walks upwards from `orig_id` to find a node which might change lint levels with attributes.
976    /// It stops at `self.hir_id` and just returns it if reached.
977    fn maybe_lint_level_root_bounded(&mut self, orig_id: HirId) -> HirId {
978        // This assertion lets us just store `ItemLocalId` in the cache, rather
979        // than the full `HirId`.
980        assert_eq!(orig_id.owner, self.hir_id.owner);
981
982        let mut id = orig_id;
983        loop {
984            if id == self.hir_id {
985                // This is a moderately common case, mostly hit for previously unseen nodes.
986                break;
987            }
988
989            if self.tcx.hir_attrs(id).iter().any(|attr| Level::from_attr(attr).is_some()) {
990                // This is a rare case. It's for a node path that doesn't reach the root due to an
991                // intervening lint level attribute. This result doesn't get cached.
992                return id;
993            }
994
995            let next = self.tcx.parent_hir_id(id);
996            if next == id {
997                bug!("lint traversal reached the root of the crate");
998            }
999            id = next;
1000
1001            // This lookup is just an optimization; it can be removed without affecting
1002            // functionality. It might seem strange to see this at the end of this loop, but the
1003            // `orig_id` passed in to this function is almost always previously unseen, for which a
1004            // lookup will be a miss. So we only do lookups for nodes up the parent chain, where
1005            // cache lookups have a very high hit rate.
1006            if self.lint_level_roots_cache.contains(id.local_id) {
1007                break;
1008            }
1009        }
1010
        // `orig_id` traced to `self.hir_id`; record this fact. If `orig_id` is a leaf node it will
1012        // rarely (never?) subsequently be searched for, but it's hard to know if that is the case.
1013        // The performance wins from the cache all come from caching non-leaf nodes.
1014        self.lint_level_roots_cache.insert(orig_id.local_id);
1015        self.hir_id
1016    }
1017
1018    /// Creates a new source scope, nested in the current one.
1019    pub(crate) fn new_source_scope(&mut self, span: Span, lint_level: LintLevel) -> SourceScope {
1020        let parent = self.source_scope;
1021        debug!(
1022            "new_source_scope({:?}, {:?}) - parent({:?})={:?}",
1023            span,
1024            lint_level,
1025            parent,
1026            self.source_scopes.get(parent)
1027        );
1028        let scope_local_data = SourceScopeLocalData {
1029            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
1030                lint_root
1031            } else {
1032                self.source_scopes[parent].local_data.as_ref().unwrap_crate_local().lint_root
1033            },
1034        };
1035        self.source_scopes.push(SourceScopeData {
1036            span,
1037            parent_scope: Some(parent),
1038            inlined: None,
1039            inlined_parent_scope: None,
1040            local_data: ClearCrossCrate::Set(scope_local_data),
1041        })
1042    }
1043
1044    /// Given a span and the current source scope, make a SourceInfo.
1045    pub(crate) fn source_info(&self, span: Span) -> SourceInfo {
1046        SourceInfo { span, scope: self.source_scope }
1047    }
1048
1049    // Finding scopes
1050    // ==============
1051
1052    /// Returns the scope that we should use as the lifetime of an
1053    /// operand. Basically, an operand must live until it is consumed.
1054    /// This is similar to, but not quite the same as, the temporary
1055    /// scope (which can be larger or smaller).
1056    ///
1057    /// Consider:
1058    /// ```ignore (illustrative)
1059    /// let x = foo(bar(X, Y));
1060    /// ```
1061    /// We wish to pop the storage for X and Y after `bar()` is
1062    /// called, not after the whole `let` is completed.
1063    ///
1064    /// As another example, if the second argument diverges:
1065    /// ```ignore (illustrative)
1066    /// foo(Box::new(2), panic!())
1067    /// ```
1068    /// We would allocate the box but then free it on the unwinding
1069    /// path; we would also emit a free on the 'success' path from
1070    /// panic, but that will turn out to be removed as dead-code.
1071    pub(crate) fn local_scope(&self) -> region::Scope {
1072        self.scopes.topmost()
1073    }
1074
1075    // Scheduling drops
1076    // ================
1077
1078    pub(crate) fn schedule_drop_storage_and_value(
1079        &mut self,
1080        span: Span,
1081        region_scope: region::Scope,
1082        local: Local,
1083    ) {
1084        self.schedule_drop(span, region_scope, local, DropKind::Storage);
1085        self.schedule_drop(span, region_scope, local, DropKind::Value);
1086    }
1087
1088    /// Indicates that `place` should be dropped on exit from `region_scope`.
1089    ///
1090    /// When called with `DropKind::Storage`, `place` shouldn't be the return
1091    /// place, or a function parameter.
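    ///
    /// As an illustrative sketch: for a local whose type needs dropping, both a
    /// `DropKind::Storage` and a `DropKind::Value` drop are typically scheduled
    /// (see [`Builder::schedule_drop_storage_and_value`] above); since drops are
    /// emitted from the end of the scope's drop stack, the value is dropped
    /// before its storage is marked dead.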
1092    pub(crate) fn schedule_drop(
1093        &mut self,
1094        span: Span,
1095        region_scope: region::Scope,
1096        local: Local,
1097        drop_kind: DropKind,
1098    ) {
1099        let needs_drop = match drop_kind {
1100            DropKind::Value | DropKind::ForLint => {
1101                if !self.local_decls[local].ty.needs_drop(self.tcx, self.typing_env()) {
1102                    return;
1103                }
1104                true
1105            }
1106            DropKind::Storage => {
1107                if local.index() <= self.arg_count {
1108                    span_bug!(
1109                        span,
1110                        "`schedule_drop` called with body argument {:?} \
1111                        but its storage does not require a drop",
1112                        local,
1113                    )
1114                }
1115                false
1116            }
1117        };
1118
1119        // When building drops, we try to cache chains of drops to reduce the
1120        // number of `DropTree::add_drop` calls. This, however, means that
1121        // whenever we add a drop into a scope which already had some entries
1122        // in the drop tree built (and thus, cached) for it, we must invalidate
1123        // all caches which might branch into the scope which had a drop just
1124        // added to it. This is necessary, because otherwise some other code
1125        // might use the cache to branch into already built chain of drops,
1126        // essentially ignoring the newly added drop.
1127        //
1128        // For example consider there’s two scopes with a drop in each. These
1129        // are built and thus the caches are filled:
1130        //
1131        // +--------------------------------------------------------+
1132        // | +---------------------------------+                    |
1133        // | | +--------+     +-------------+  |  +---------------+ |
1134        // | | | return | <-+ | drop(outer) | <-+ |  drop(middle) | |
1135        // | | +--------+     +-------------+  |  +---------------+ |
1136        // | +------------|outer_scope cache|--+                    |
1137        // +------------------------------|middle_scope cache|------+
1138        //
1139        // Now, a new, innermost scope is added along with a new drop into
1140        // both innermost and outermost scopes:
1141        //
1142        // +------------------------------------------------------------+
1143        // | +----------------------------------+                       |
1144        // | | +--------+      +-------------+  |   +---------------+   | +-------------+
1145        // | | | return | <+   | drop(new)   | <-+  |  drop(middle) | <--+| drop(inner) |
1146        // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
1147        // | |             +-+ +-------------+  |                       |
1148        // | +---|invalid outer_scope cache|----+                       |
1149        // +----=----------------|invalid middle_scope cache|-----------+
1150        //
1151        // If, when adding `drop(new)` we do not invalidate the cached blocks for both
1152        // outer_scope and middle_scope, then, when building drops for the inner (rightmost)
1153        // scope, the old, cached blocks, without `drop(new)` will get used, producing the
1154        // wrong results.
1155        //
1156        // Note that this code iterates scopes from the innermost to the outermost,
        // invalidating the caches of each scope visited. This way the bare minimum of
        // caches gets invalidated. That is, if a new drop is added into the middle scope,
        // the cache of the outer scope stays intact.
1160        //
1161        // Since we only cache drops for the unwind path and the coroutine drop
1162        // path, we only need to invalidate the cache for drops that happen on
1163        // the unwind or coroutine drop paths. This means that for
1164        // non-coroutines we don't need to invalidate caches for `DropKind::Storage`.
1165        let invalidate_caches = needs_drop || self.coroutine.is_some();
1166        for scope in self.scopes.scopes.iter_mut().rev() {
1167            if invalidate_caches {
1168                scope.invalidate_cache();
1169            }
1170
1171            if scope.region_scope == region_scope {
1172                let region_scope_span = region_scope.span(self.tcx, self.region_scope_tree);
1173                // Attribute scope exit drops to scope's closing brace.
1174                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);
1175
1176                scope.drops.push(DropData {
1177                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
1178                    local,
1179                    kind: drop_kind,
1180                });
1181
1182                return;
1183            }
1184        }
1185
1186        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
1187    }
1188
1189    /// Schedule emission of a backwards incompatible drop lint hint.
1190    /// Applicable only to temporary values for now.
1191    #[instrument(level = "debug", skip(self))]
1192    pub(crate) fn schedule_backwards_incompatible_drop(
1193        &mut self,
1194        span: Span,
1195        region_scope: region::Scope,
1196        local: Local,
1197    ) {
1198        // Note that we are *not* gating BIDs here on whether they have significant destructor.
1199        // We need to know all of them so that we can capture potential borrow-checking errors.
1200        for scope in self.scopes.scopes.iter_mut().rev() {
            // Since we are inserting a linting MIR statement, we have to invalidate the caches
1202            scope.invalidate_cache();
1203            if scope.region_scope == region_scope {
1204                let region_scope_span = region_scope.span(self.tcx, self.region_scope_tree);
1205                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);
1206
1207                scope.drops.push(DropData {
1208                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
1209                    local,
1210                    kind: DropKind::ForLint,
1211                });
1212
1213                return;
1214            }
1215        }
1216        span_bug!(
1217            span,
1218            "region scope {:?} not in scope to drop {:?} for linting",
1219            region_scope,
1220            local
1221        );
1222    }
1223
1224    /// Indicates that the "local operand" stored in `local` is
1225    /// *moved* at some point during execution (see `local_scope` for
1226    /// more information about what a "local operand" is -- in short,
1227    /// it's an intermediate operand created as part of preparing some
1228    /// MIR instruction). We use this information to suppress
1229    /// redundant drops on the non-unwind paths. This results in less
1230    /// MIR, but also avoids spurious borrow check errors
1231    /// (c.f. #64391).
1232    ///
1233    /// Example: when compiling the call to `foo` here:
1234    ///
1235    /// ```ignore (illustrative)
1236    /// foo(bar(), ...)
1237    /// ```
1238    ///
1239    /// we would evaluate `bar()` to an operand `_X`. We would also
1240    /// schedule `_X` to be dropped when the expression scope for
1241    /// `foo(bar())` is exited. This is relevant, for example, if the
1242    /// later arguments should unwind (it would ensure that `_X` gets
1243    /// dropped). However, if no unwind occurs, then `_X` will be
1244    /// unconditionally consumed by the `call`:
1245    ///
1246    /// ```ignore (illustrative)
1247    /// bb {
1248    ///   ...
1249    ///   _R = CALL(foo, _X, ...)
1250    /// }
1251    /// ```
1252    ///
1253    /// However, `_X` is still registered to be dropped, and so if we
1254    /// do nothing else, we would generate a `DROP(_X)` that occurs
1255    /// after the call. This will later be optimized out by the
1256    /// drop-elaboration code, but in the meantime it can lead to
1257    /// spurious borrow-check errors -- the problem, ironically, is
1258    /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
1259    /// that it creates. See #64391 for an example.
1260    pub(crate) fn record_operands_moved(&mut self, operands: &[Spanned<Operand<'tcx>>]) {
1261        let local_scope = self.local_scope();
1262        let scope = self.scopes.scopes.last_mut().unwrap();
1263
1264        assert_eq!(scope.region_scope, local_scope, "local scope is not the topmost scope!",);
1265
1266        // look for moves of a local variable, like `MOVE(_X)`
1267        let locals_moved = operands.iter().flat_map(|operand| match operand.node {
1268            Operand::Copy(_) | Operand::Constant(_) => None,
1269            Operand::Move(place) => place.as_local(),
1270        });
1271
1272        for local in locals_moved {
1273            // check if we have a Drop for this operand and -- if so
1274            // -- add it to the list of moved operands. Note that this
1275            // local might not have been an operand created for this
1276            // call, it could come from other places too.
1277            if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
1278                scope.moved_locals.push(local);
1279            }
1280        }
1281    }
1282
1283    // Other
1284    // =====
1285
1286    /// Returns the [DropIdx] of the innermost drop that would run if the function
1287    /// unwound at this point. The `DropIdx` will be created if it doesn't already exist.
1288    fn diverge_cleanup(&mut self) -> DropIdx {
1289        // It is okay to use a dummy span here, because looking up the scope index
1290        // of the topmost scope must always succeed.
1291        self.diverge_cleanup_target(self.scopes.topmost(), DUMMY_SP)
1292    }
1293
1294    /// This is similar to [diverge_cleanup](Self::diverge_cleanup), except that its target
1295    /// is some ancestor scope rather than the current scope.
1296    /// Unwinding to an ancestor scope is possible if a drop panics while the program
1297    /// is breaking out of an if-then scope.
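    ///
    /// A hedged sketch of the two ways this is reached (the second call mirrors
    /// `build_exit_tree` below; illustrative only):
    ///
    /// ```ignore (illustrative)
    /// // the common case: target the current (topmost) scope
    /// let idx = self.diverge_cleanup_target(self.scopes.topmost(), DUMMY_SP);
    /// // the ancestor case, used when linking an exit drop tree into the unwind tree
    /// let idx = self.diverge_cleanup_target(else_scope, span);
    /// ```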
1298    fn diverge_cleanup_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
1299        let target = self.scopes.scope_index(target_scope, span);
1300        let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
1301            .iter()
1302            .enumerate()
1303            .rev()
1304            .find_map(|(scope_idx, scope)| {
1305                scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block))
1306            })
1307            .unwrap_or((0, ROOT_NODE));
1308
1309        if uncached_scope > target {
1310            return cached_drop;
1311        }
1312
1313        let is_coroutine = self.coroutine.is_some();
1314        for scope in &mut self.scopes.scopes[uncached_scope..=target] {
1315            for drop in &scope.drops {
1316                if is_coroutine || drop.kind == DropKind::Value {
1317                    cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop);
1318                }
1319            }
1320            scope.cached_unwind_block = Some(cached_drop);
1321        }
1322
1323        cached_drop
1324    }
1325
1326    /// Prepares to create a path that performs all required cleanup for a
1327    /// terminator that can unwind at the given basic block.
1328    ///
1329    /// This path terminates in [TerminatorKind::UnwindResume]. The path isn't created
1330    /// until after all of the non-unwind paths in this item have been lowered.
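    ///
    /// A minimal usage sketch (mirroring how `assert` and `build_drop_and_replace`
    /// below use it; illustrative, not the exact lowering code):
    ///
    /// ```ignore (illustrative)
    /// this.cfg.terminate(block, source_info, TerminatorKind::Call { /* ... */ });
    /// this.diverge_from(block); // register `block` as an entry into the unwind drop tree
    /// ```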
1331    pub(crate) fn diverge_from(&mut self, start: BasicBlock) {
1332        debug_assert!(
1333            matches!(
1334                self.cfg.block_data(start).terminator().kind,
1335                TerminatorKind::Assert { .. }
1336                    | TerminatorKind::Call { .. }
1337                    | TerminatorKind::Drop { .. }
1338                    | TerminatorKind::FalseUnwind { .. }
1339                    | TerminatorKind::InlineAsm { .. }
1340            ),
1341            "diverge_from called on block with terminator that cannot unwind."
1342        );
1343
1344        let next_drop = self.diverge_cleanup();
1345        self.scopes.unwind_drops.add_entry_point(start, next_drop);
1346    }
1347
1348    /// Returns the [DropIdx] for the innermost drop on the dropline (the coroutine drop path).
1349    /// The `DropIdx` will be created if it doesn't already exist.
1350    fn diverge_dropline(&mut self) -> DropIdx {
1351        // It is okay to use a dummy span here, because looking up the scope index
1352        // of the topmost scope must always succeed.
1353        self.diverge_dropline_target(self.scopes.topmost(), DUMMY_SP)
1354    }
1355
1356    /// Similar to [diverge_cleanup_target](Self::diverge_cleanup_target), but for the dropline (coroutine drop path).
1357    fn diverge_dropline_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
1358        debug_assert!(
1359            self.coroutine.is_some(),
1360            "diverge_dropline_target is valid only for coroutine"
1361        );
1362        let target = self.scopes.scope_index(target_scope, span);
1363        let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
1364            .iter()
1365            .enumerate()
1366            .rev()
1367            .find_map(|(scope_idx, scope)| {
1368                scope.cached_coroutine_drop_block.map(|cached_block| (scope_idx + 1, cached_block))
1369            })
1370            .unwrap_or((0, ROOT_NODE));
1371
1372        if uncached_scope > target {
1373            return cached_drop;
1374        }
1375
1376        for scope in &mut self.scopes.scopes[uncached_scope..=target] {
1377            for drop in &scope.drops {
1378                cached_drop = self.scopes.coroutine_drops.add_drop(*drop, cached_drop);
1379            }
1380            scope.cached_coroutine_drop_block = Some(cached_drop);
1381        }
1382
1383        cached_drop
1384    }
1385
1386    /// Sets up a path that performs all required cleanup for dropping a
1387    /// coroutine, starting from the given block that ends in
1388    /// [TerminatorKind::Yield].
1389    ///
1390    /// This path terminates in [TerminatorKind::CoroutineDrop].
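    ///
    /// A hedged sketch of how a `yield` lowering would use this (illustrative,
    /// not the exact lowering code):
    ///
    /// ```ignore (illustrative)
    /// this.cfg.terminate(block, source_info, TerminatorKind::Yield { /* ... */ });
    /// this.coroutine_drop_cleanup(block); // the `drop` edge is filled in when the drop tree is built
    /// ```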
1391    pub(crate) fn coroutine_drop_cleanup(&mut self, yield_block: BasicBlock) {
1392        debug_assert!(
1393            matches!(
1394                self.cfg.block_data(yield_block).terminator().kind,
1395                TerminatorKind::Yield { .. }
1396            ),
1397            "coroutine_drop_cleanup called on block with non-yield terminator."
1398        );
1399        let cached_drop = self.diverge_dropline();
1400        self.scopes.coroutine_drops.add_entry_point(yield_block, cached_drop);
1401    }
1402
1403    /// Utility function for *non*-scope code to build its own drops:
1404    /// forces a drop at this point in the MIR by creating a new block.
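    ///
    /// The rough CFG shape this produces (an illustrative sketch; the block names
    /// match the locals used in the body below):
    ///
    /// ```ignore (illustrative)
    /// block:          Drop(place)  --success--> assign
    ///                              --unwind---> assign_unwind   (cleanup block)
    /// assign:         place = value; ...        // normal path continues here
    /// assign_unwind:  place = value; ...        // unwinding continues from here
    /// ```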
1405    pub(crate) fn build_drop_and_replace(
1406        &mut self,
1407        block: BasicBlock,
1408        span: Span,
1409        place: Place<'tcx>,
1410        value: Rvalue<'tcx>,
1411    ) -> BlockAnd<()> {
1412        let source_info = self.source_info(span);
1413
1414        // create the new block for the assignment
1415        let assign = self.cfg.start_new_block();
1416        self.cfg.push_assign(assign, source_info, place, value.clone());
1417
1418        // create the new block for the assignment in the case of unwinding
1419        let assign_unwind = self.cfg.start_new_cleanup_block();
1420        self.cfg.push_assign(assign_unwind, source_info, place, value.clone());
1421
1422        self.cfg.terminate(
1423            block,
1424            source_info,
1425            TerminatorKind::Drop {
1426                place,
1427                target: assign,
1428                unwind: UnwindAction::Cleanup(assign_unwind),
1429                replace: true,
1430                drop: None,
1431                async_fut: None,
1432            },
1433        );
1434        self.diverge_from(block);
1435
1436        assign.unit()
1437    }
1438
1439    /// Creates an `Assert` terminator and returns the success block.
1440    /// If the boolean condition operand does not evaluate to the expected value,
1441    /// a runtime panic with the given message is raised.
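    ///
    /// For example, an array bounds check could be lowered roughly like this
    /// (a hedged sketch; `lt`, `len`, `index`, and `span` are assumed to be
    /// already-built operands/spans, and the real message construction may differ):
    ///
    /// ```ignore (illustrative)
    /// let msg = AssertKind::BoundsCheck { len, index };
    /// block = this.assert(block, Operand::Move(lt), true, msg, span);
    /// // `block` is now the success block; the failure edge panics with `msg`.
    /// ```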
1442    pub(crate) fn assert(
1443        &mut self,
1444        block: BasicBlock,
1445        cond: Operand<'tcx>,
1446        expected: bool,
1447        msg: AssertMessage<'tcx>,
1448        span: Span,
1449    ) -> BasicBlock {
1450        let source_info = self.source_info(span);
1451        let success_block = self.cfg.start_new_block();
1452
1453        self.cfg.terminate(
1454            block,
1455            source_info,
1456            TerminatorKind::Assert {
1457                cond,
1458                expected,
1459                msg: Box::new(msg),
1460                target: success_block,
1461                unwind: UnwindAction::Continue,
1462            },
1463        );
1464        self.diverge_from(block);
1465
1466        success_block
1467    }
1468
1469    /// Unschedules any drops in the top scope.
1470    ///
1471    /// This is only needed for `match` arm scopes, because they have one
1472    /// entrance per pattern, but only one exit.
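    ///
    /// A hedged sketch of a call site during match lowering (illustrative;
    /// `arm.scope` stands in for the arm's `region::Scope`):
    ///
    /// ```ignore (illustrative)
    /// // the arm scope is entered once per pattern; before re-entering it,
    /// // unschedule whatever the previous entrance scheduled:
    /// this.clear_top_scope(arm.scope);
    /// ```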
1473    pub(crate) fn clear_top_scope(&mut self, region_scope: region::Scope) {
1474        let top_scope = self.scopes.scopes.last_mut().unwrap();
1475
1476        assert_eq!(top_scope.region_scope, region_scope);
1477
1478        top_scope.drops.clear();
1479        top_scope.invalidate_cache();
1480    }
1481}
1482
1483/// Builds drops for `pop_scope` and `leave_top_scope`.
1484///
1485/// # Parameters
1486///
1487/// * `unwind_drops`, the drop tree data structure storing what needs to be cleaned up if an unwind occurs
1488/// * `coroutine_drops`, the drop tree for the coroutine drop path (cleanup that runs when a suspended coroutine is dropped)
1489/// * `scope`, describes the drops that will occur on exiting the scope in regular execution
1490/// * `block`, the block to branch to once drops are complete (assuming no unwind occurs)
1491/// * `unwind_to`, describes the drops that would occur at this point in the code if a panic occurred
1492///   (a subset of the drops in `scope`, since we sometimes elide StorageDead and other instructions on unwinding)
1493/// * `dropline_to`, describes the drops that would occur at this point in the code if a coroutine drop occurred
1494/// * `storage_dead_on_unwind`, if true, then we should emit `StorageDead` even when unwinding
1495/// * `arg_count`, number of MIR local variables corresponding to fn arguments (used to assert that we don't drop those)
1496/// * `is_async_drop`, predicate indicating whether dropping a given local requires the async (coroutine) drop path
1497fn build_scope_drops<'tcx, F>(
1498    cfg: &mut CFG<'tcx>,
1499    unwind_drops: &mut DropTree,
1500    coroutine_drops: &mut DropTree,
1501    scope: &Scope,
1502    block: BasicBlock,
1503    unwind_to: DropIdx,
1504    dropline_to: Option<DropIdx>,
1505    storage_dead_on_unwind: bool,
1506    arg_count: usize,
1507    is_async_drop: F,
1508) -> BlockAnd<()>
1509where
1510    F: Fn(Local) -> bool,
1511{
1512    debug!("build_scope_drops({:?} -> {:?}), dropline_to={:?}", block, scope, dropline_to);
1513
1514    // Build up the drops in evaluation order. The end result will
1515    // look like:
1516    //
1517    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
1518    //               |                    |                 |
1519    //               :                    |                 |
1520    //                                    V                 V
1521    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
1522    //
1523    // The horizontal arrows represent the execution path when the drops return
1524    // successfully. The downwards arrows represent the execution path when the
1525    // drops panic (panicking while unwinding will abort, so there's no need for
1526    // another set of arrows).
1527    //
1528    // For coroutines, we unwind from a drop on a local to its StorageDead
1529    // statement. For other functions we don't worry about StorageDead. The
1530    // drops for the unwind path should have already been generated by
1531    // `diverge_cleanup`.
1532
1533    // `unwind_to` indicates what needs to be dropped should unwinding occur.
1534    // This is a subset of what needs to be dropped when exiting the scope.
1535    // As we emit this scope's drops below (in reverse order), we also move `unwind_to`
1536    // backwards in step, so that it stays correct should a destructor panic.
1537    let mut unwind_to = unwind_to;
1538
1539    // The block that we should jump to after drops complete. We start by building the final drop (`drops[n]`
1540    // in the diagram above) and then build the drops (e.g., `drop[1]`, `drop[0]`) that come before it.
1541    // `block` begins as the successor of `drops[n]` and then becomes `drops[n]` so that `drops[n-1]`
1542    // will branch to `drops[n]`.
1543    let mut block = block;
1544
1545    // `dropline_to` indicates what needs to be dropped should coroutine drop occur.
1546    let mut dropline_to = dropline_to;
1547
1548    for drop_data in scope.drops.iter().rev() {
1549        let source_info = drop_data.source_info;
1550        let local = drop_data.local;
1551
1552        match drop_data.kind {
1553            DropKind::Value => {
1554                // `unwind_to` should drop the value that we're about to
1555                // schedule. If dropping this value panics, then we continue
1556                // with the *next* value on the unwind path.
1557                //
1558                // We adjust this BEFORE we create the drop (e.g., `drops[n]`)
1559                // because `drops[n]` should unwind to `drops[n-1]`.
1560                debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.local, drop_data.local);
1561                debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.kind, drop_data.kind);
1562                unwind_to = unwind_drops.drop_nodes[unwind_to].next;
1563
1564                if let Some(idx) = dropline_to {
1565                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.local, drop_data.local);
1566                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.kind, drop_data.kind);
1567                    dropline_to = Some(coroutine_drops.drop_nodes[idx].next);
1568                }
1569
1570                // If the operand has been moved, and we are not on an unwind
1571                // path, then don't generate the drop. (We only take this into
1572                // account for non-unwind paths so as not to disturb the
1573                // caching mechanism.)
1574                if scope.moved_locals.contains(&local) {
1575                    continue;
1576                }
1577
1578                unwind_drops.add_entry_point(block, unwind_to);
1579                if let Some(to) = dropline_to
1580                    && is_async_drop(local)
1581                {
1582                    coroutine_drops.add_entry_point(block, to);
1583                }
1584
1585                let next = cfg.start_new_block();
1586                cfg.terminate(
1587                    block,
1588                    source_info,
1589                    TerminatorKind::Drop {
1590                        place: local.into(),
1591                        target: next,
1592                        unwind: UnwindAction::Continue,
1593                        replace: false,
1594                        drop: None,
1595                        async_fut: None,
1596                    },
1597                );
1598                block = next;
1599            }
1600            DropKind::ForLint => {
1601                // As in the `DropKind::Storage` case below:
1602                // normally, lint-related drop hints are not emitted on the unwind
1603                // path, so we can leave `unwind_to` unmodified. In some cases,
1604                // however, we emit them on the unwind path as well, and then
1605                // `unwind_to` needs to be adjusted.
1606                if storage_dead_on_unwind {
1607                    debug_assert_eq!(
1608                        unwind_drops.drop_nodes[unwind_to].data.local,
1609                        drop_data.local
1610                    );
1611                    debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.kind, drop_data.kind);
1612                    unwind_to = unwind_drops.drop_nodes[unwind_to].next;
1613                }
1614
1615                // If the operand has been moved, and we are not on an unwind
1616                // path, then don't emit the backward-incompatible drop hint.
1617                // (We only take this into account for non-unwind paths so as
1618                // not to disturb the caching mechanism.)
1619                if scope.moved_locals.contains(&local) {
1620                    continue;
1621                }
1622
1623                cfg.push(
1624                    block,
1625                    Statement {
1626                        source_info,
1627                        kind: StatementKind::BackwardIncompatibleDropHint {
1628                            place: Box::new(local.into()),
1629                            reason: BackwardIncompatibleDropReason::Edition2024,
1630                        },
1631                    },
1632                );
1633            }
1634            DropKind::Storage => {
1635                // Ordinarily, storage-dead nodes are not emitted on unwind, so we don't
1636                // need to adjust `unwind_to` on this path. However, in some specific cases
1637                // we *do* emit storage-dead nodes on the unwind path, and in that case now that
1638                // the storage-dead has completed, we need to adjust the `unwind_to` pointer
1639                // so that any future drops we emit will not register storage-dead.
1640                if storage_dead_on_unwind {
1641                    debug_assert_eq!(
1642                        unwind_drops.drop_nodes[unwind_to].data.local,
1643                        drop_data.local
1644                    );
1645                    debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.kind, drop_data.kind);
1646                    unwind_to = unwind_drops.drop_nodes[unwind_to].next;
1647                }
1648                if let Some(idx) = dropline_to {
1649                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.local, drop_data.local);
1650                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.kind, drop_data.kind);
1651                    dropline_to = Some(coroutine_drops.drop_nodes[idx].next);
1652                }
1653                // Only temps and vars need their storage dead.
1654                assert!(local.index() > arg_count);
1655                cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) });
1656            }
1657        }
1658    }
1659    block.unit()
1660}
1661
1662impl<'a, 'tcx: 'a> Builder<'a, 'tcx> {
1663    /// Build a drop tree for a breakable scope.
1664    ///
1665    /// If `continue_block` is `Some`, then the tree is for `continue` inside a
1666    /// loop. Otherwise this is for `break` or `return`.
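    ///
    /// A hedged sketch of how `in_breakable_scope` would call this for a loop
    /// (illustrative; the exact names and plumbing differ):
    ///
    /// ```ignore (illustrative)
    /// // break/return drops: no continue target
    /// let break_block = self.build_exit_tree(break_drops, region_scope, span, None);
    /// // continue drops: the tree's root branches back to the loop head
    /// self.build_exit_tree(continue_drops, region_scope, span, Some(loop_head));
    /// ```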
1667    fn build_exit_tree(
1668        &mut self,
1669        mut drops: DropTree,
1670        else_scope: region::Scope,
1671        span: Span,
1672        continue_block: Option<BasicBlock>,
1673    ) -> Option<BlockAnd<()>> {
1674        let blocks = drops.build_mir::<ExitScopes>(&mut self.cfg, continue_block);
1675        let is_coroutine = self.coroutine.is_some();
1676
1677        // Link the exit drop tree to the unwind drop tree.
1678        if drops.drop_nodes.iter().any(|drop_node| drop_node.data.kind == DropKind::Value) {
1679            let unwind_target = self.diverge_cleanup_target(else_scope, span);
1680            let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1);
1681            for (drop_idx, drop_node) in drops.drop_nodes.iter_enumerated().skip(1) {
1682                match drop_node.data.kind {
1683                    DropKind::Storage | DropKind::ForLint => {
1684                        if is_coroutine {
1685                            let unwind_drop = self
1686                                .scopes
1687                                .unwind_drops
1688                                .add_drop(drop_node.data, unwind_indices[drop_node.next]);
1689                            unwind_indices.push(unwind_drop);
1690                        } else {
1691                            unwind_indices.push(unwind_indices[drop_node.next]);
1692                        }
1693                    }
1694                    DropKind::Value => {
1695                        let unwind_drop = self
1696                            .scopes
1697                            .unwind_drops
1698                            .add_drop(drop_node.data, unwind_indices[drop_node.next]);
1699                        self.scopes.unwind_drops.add_entry_point(
1700                            blocks[drop_idx].unwrap(),
1701                            unwind_indices[drop_node.next],
1702                        );
1703                        unwind_indices.push(unwind_drop);
1704                    }
1705                }
1706            }
1707        }
1708        // Link the exit drop tree to the dropline drop tree (coroutine drop path) for async drops.
1709        if is_coroutine
1710            && drops.drop_nodes.iter().any(|DropNode { data, next: _ }| {
1711                data.kind == DropKind::Value && self.is_async_drop(data.local)
1712            })
1713        {
1714            let dropline_target = self.diverge_dropline_target(else_scope, span);
1715            let mut dropline_indices = IndexVec::from_elem_n(dropline_target, 1);
1716            for (drop_idx, drop_data) in drops.drop_nodes.iter_enumerated().skip(1) {
1717                let coroutine_drop = self
1718                    .scopes
1719                    .coroutine_drops
1720                    .add_drop(drop_data.data, dropline_indices[drop_data.next]);
1721                match drop_data.data.kind {
1722                    DropKind::Storage | DropKind::ForLint => {}
1723                    DropKind::Value => {
1724                        if self.is_async_drop(drop_data.data.local) {
1725                            self.scopes.coroutine_drops.add_entry_point(
1726                                blocks[drop_idx].unwrap(),
1727                                dropline_indices[drop_data.next],
1728                            );
1729                        }
1730                    }
1731                }
1732                dropline_indices.push(coroutine_drop);
1733            }
1734        }
1735        blocks[ROOT_NODE].map(BasicBlock::unit)
1736    }
1737
1738    /// Build the unwind and coroutine drop trees.
1739    pub(crate) fn build_drop_trees(&mut self) {
1740        if self.coroutine.is_some() {
1741            self.build_coroutine_drop_trees();
1742        } else {
1743            Self::build_unwind_tree(
1744                &mut self.cfg,
1745                &mut self.scopes.unwind_drops,
1746                self.fn_span,
1747                &mut None,
1748            );
1749        }
1750    }
1751
1752    fn build_coroutine_drop_trees(&mut self) {
1753        // Build the drop tree for dropping the coroutine while it's suspended.
1754        let drops = &mut self.scopes.coroutine_drops;
1755        let cfg = &mut self.cfg;
1756        let fn_span = self.fn_span;
1757        let blocks = drops.build_mir::<CoroutineDrop>(cfg, None);
1758        if let Some(root_block) = blocks[ROOT_NODE] {
1759            cfg.terminate(
1760                root_block,
1761                SourceInfo::outermost(fn_span),
1762                TerminatorKind::CoroutineDrop,
1763            );
1764        }
1765
1766        // Build the drop tree for unwinding in the normal control flow paths.
1767        let resume_block = &mut None;
1768        let unwind_drops = &mut self.scopes.unwind_drops;
1769        Self::build_unwind_tree(cfg, unwind_drops, fn_span, resume_block);
1770
1771        // Build the drop tree for unwinding when dropping a suspended
1772        // coroutine.
1773        //
1774        // This tree is kept separate from the standard unwind paths to
1775        // prevent drop elaboration from creating drop flags that would have
1776        // to be captured by the coroutine. It's unclear how important this
1777        // optimization is, but it is kept here.
1778        for (drop_idx, drop_node) in drops.drop_nodes.iter_enumerated() {
1779            if let DropKind::Value = drop_node.data.kind
1780                && let Some(bb) = blocks[drop_idx]
1781            {
1782                debug_assert!(drop_node.next < drops.drop_nodes.next_index());
1783                drops.entry_points.push((drop_node.next, bb));
1784            }
1785        }
1786        Self::build_unwind_tree(cfg, drops, fn_span, resume_block);
1787    }
1788
1789    fn build_unwind_tree(
1790        cfg: &mut CFG<'tcx>,
1791        drops: &mut DropTree,
1792        fn_span: Span,
1793        resume_block: &mut Option<BasicBlock>,
1794    ) {
1795        let blocks = drops.build_mir::<Unwind>(cfg, *resume_block);
1796        if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) {
1797            cfg.terminate(resume, SourceInfo::outermost(fn_span), TerminatorKind::UnwindResume);
1798
1799            *resume_block = blocks[ROOT_NODE];
1800        }
1801    }
1802}
1803
1804// DropTreeBuilder implementations.
1805
1806struct ExitScopes;
1807
1808impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes {
1809    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
1810        cfg.start_new_block()
1811    }
1812    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1813        // There should be an existing terminator with real source info and a
1814        // dummy TerminatorKind. Replace it with a proper goto.
1815        // (The dummy is added by `break_scope` and `break_for_else`.)
1816        let term = cfg.block_data_mut(from).terminator_mut();
1817        if let TerminatorKind::UnwindResume = term.kind {
1818            term.kind = TerminatorKind::Goto { target: to };
1819        } else {
1820            span_bug!(term.source_info.span, "unexpected dummy terminator kind: {:?}", term.kind);
1821        }
1822    }
1823}
1824
1825struct CoroutineDrop;
1826
1827impl<'tcx> DropTreeBuilder<'tcx> for CoroutineDrop {
1828    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
1829        cfg.start_new_block()
1830    }
1831    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1832        let term = cfg.block_data_mut(from).terminator_mut();
1833        if let TerminatorKind::Yield { ref mut drop, .. } = term.kind {
1834            *drop = Some(to);
1835        } else if let TerminatorKind::Drop { ref mut drop, .. } = term.kind {
1836            *drop = Some(to);
1837        } else {
1838            span_bug!(
1839                term.source_info.span,
1840                "cannot enter coroutine drop tree from {:?}",
1841                term.kind
1842            )
1843        }
1844    }
1845}
1846
1847struct Unwind;
1848
1849impl<'tcx> DropTreeBuilder<'tcx> for Unwind {
1850    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
1851        cfg.start_new_cleanup_block()
1852    }
1853    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1854        let term = cfg.block_data_mut(from).terminator_mut();
1855        match &mut term.kind {
1856            TerminatorKind::Drop { unwind, .. } => {
1857                if let UnwindAction::Cleanup(unwind) = *unwind {
1858                    let source_info = term.source_info;
1859                    cfg.terminate(unwind, source_info, TerminatorKind::Goto { target: to });
1860                } else {
1861                    *unwind = UnwindAction::Cleanup(to);
1862                }
1863            }
1864            TerminatorKind::FalseUnwind { unwind, .. }
1865            | TerminatorKind::Call { unwind, .. }
1866            | TerminatorKind::Assert { unwind, .. }
1867            | TerminatorKind::InlineAsm { unwind, .. } => {
1868                *unwind = UnwindAction::Cleanup(to);
1869            }
1870            TerminatorKind::Goto { .. }
1871            | TerminatorKind::SwitchInt { .. }
1872            | TerminatorKind::UnwindResume
1873            | TerminatorKind::UnwindTerminate(_)
1874            | TerminatorKind::Return
1875            | TerminatorKind::TailCall { .. }
1876            | TerminatorKind::Unreachable
1877            | TerminatorKind::Yield { .. }
1878            | TerminatorKind::CoroutineDrop
1879            | TerminatorKind::FalseEdge { .. } => {
1880                span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind)
1881            }
1882        }
1883    }
1884}