// rustc_data_structures/profiling.rs

//! # Rust Compiler Self-Profiling
//!
//! This module implements the basic framework for the compiler's self-
//! profiling support. It provides the `SelfProfiler` type which enables
//! recording "events". An event is something that starts and ends at a given
//! point in time and has an ID and a kind attached to it. This allows for
//! tracing the compiler's activity.
//!
//! Internally this module uses the custom tailored [measureme][mm] crate for
//! efficiently recording events to disk in a compact format that can be
//! post-processed and analyzed by the suite of tools in the `measureme`
//! project. The highest priority for the tracing framework is to incur as
//! little overhead as possible.
//!
//!
//! ## Event Overview
//!
//! Events have a few properties:
//!
//! - The `event_kind` designates the broad category of an event (e.g. does it
//!   correspond to the execution of a query provider or to loading something
//!   from the incr. comp. on-disk cache, etc.).
//! - The `event_id` designates the query invocation or function call it
//!   corresponds to, possibly including the query key or function arguments.
//! - Each event stores the ID of the thread it was recorded on.
//! - The timestamp stores the beginning and end of the event, or the single
//!   point in time it occurred at for "instant" events.
//!
//!
//! ## Event Filtering
//!
//! Event generation can be filtered by event kind. Recording all possible
//! events generates a lot of data, much of which is not needed for most kinds
//! of analysis. So, in order to keep overhead as low as possible for a given
//! use case, the `SelfProfiler` will only record the kinds of events that
//! pass the filter specified as a command line argument to the compiler.
//!
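//! For example, one might record the default event kinds plus query keys and
//! function arguments with an invocation along these lines (a sketch; the full
//! list of recognized event names is in `EVENT_FILTERS_BY_NAME` below):
//!
//! ```text
//! rustc -Z self-profile -Z self-profile-events=default,args foo.rs
//! ```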
//!
//! ## `event_id` Assignment
//!
//! As far as `measureme` is concerned, `event_id`s are just strings. However,
//! it would incur too much overhead to generate and persist each `event_id`
//! string at the point where the event is recorded. In order to make this more
//! efficient `measureme` has two features:
//!
//! - Strings can share their content, so that re-occurring parts don't have to
//!   be copied over and over again. One allocates a string in `measureme` and
//!   gets back a `StringId`. This `StringId` is then used to refer to that
//!   string. `measureme` strings are actually DAGs of string components so that
//!   arbitrary sharing of substrings can be done efficiently. This is useful
//!   because `event_id`s contain lots of redundant text like query names or
//!   def-path components.
//!
//! - `StringId`s can be "virtual", which means that the client picks a numeric
//!   ID according to some application-specific scheme and can later make that
//!   ID be mapped to an actual string. This is used to cheaply generate
//!   `event_id`s while the events actually occur, causing little timing
//!   distortion, and then later map those `StringId`s, in bulk, to actual
//!   `event_id` strings. This way the largest part of the tracing overhead is
//!   localized to one contiguous chunk of time.
//!
//! How are these `event_id`s generated in the compiler? For things that occur
//! infrequently (e.g. "generic activities"), we just allocate the string the
//! first time it is used and then keep the `StringId` in a hash table. This
//! is implemented in `SelfProfiler::get_or_alloc_cached_string()`.
//!
//! For queries it gets more interesting: First we need a unique numeric ID for
//! each query invocation (the `QueryInvocationId`). This ID is used as the
//! virtual `StringId` we use as `event_id` for a given event. This ID has to
//! be available both when the query is executed and later, together with the
//! query key, when we allocate the actual `event_id` strings in bulk.
//!
//! We could make the compiler generate and keep track of such an ID for each
//! query invocation but luckily we already have something that fits all the
//! requirements: the query's `DepNodeIndex`. So we use the numeric value
//! of the `DepNodeIndex` as `event_id` when recording the event and then,
//! just before the query context is dropped, we walk the entire query cache
//! (which stores the `DepNodeIndex` along with the query key for each
//! invocation) and allocate the corresponding strings together with a mapping
//! from `DepNodeIndex` to `StringId`.
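//!
//! As a rough sketch of that workflow, using the APIs defined further down in
//! this file (`prof` is a `SelfProfilerRef`, `profiler` the underlying
//! `SelfProfiler`; `dep_node_index` and the query key string are illustrative):
//!
//! ```ignore (requires a fully configured SelfProfiler)
//! // While the query provider runs: record the interval event and tag it with
//! // the numeric invocation ID, which acts as a virtual `StringId`.
//! let guard = prof.query_provider();
//! // ... run the provider ...
//! guard.finish_with_query_invocation_id(QueryInvocationId(dep_node_index));
//!
//! // Later, just before the query context is dropped: resolve the virtual ID
//! // to the actual `event_id` string.
//! let event_id_string = profiler.get_or_alloc_cached_string("typeck(foo::bar)");
//! profiler.map_query_invocation_id_to_string(
//!     QueryInvocationId(dep_node_index),
//!     event_id_string,
//! );
//! ```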
//!
//! [mm]: https://github.com/rust-lang/measureme/

use std::borrow::Borrow;
use std::collections::hash_map::Entry;
use std::error::Error;
use std::fmt::Display;
use std::intrinsics::unlikely;
use std::path::Path;
use std::sync::Arc;
use std::sync::atomic::Ordering;
use std::time::{Duration, Instant};
use std::{fs, process};

pub use measureme::EventId;
use measureme::{EventIdBuilder, Profiler, SerializableString, StringId};
use parking_lot::RwLock;
use smallvec::SmallVec;
use tracing::warn;

use crate::fx::FxHashMap;
use crate::outline;
use crate::sync::AtomicU64;

bitflags::bitflags! {
    #[derive(Clone, Copy)]
    struct EventFilter: u16 {
        const GENERIC_ACTIVITIES  = 1 << 0;
        const QUERY_PROVIDERS     = 1 << 1;
        /// Store detailed instant events, including timestamp and thread ID,
        /// for each query cache hit. Note that this is quite expensive.
        const QUERY_CACHE_HITS    = 1 << 2;
        const QUERY_BLOCKED       = 1 << 3;
        const INCR_CACHE_LOADS    = 1 << 4;

        const QUERY_KEYS          = 1 << 5;
        const FUNCTION_ARGS       = 1 << 6;
        const LLVM                = 1 << 7;
        const INCR_RESULT_HASHING = 1 << 8;
        const ARTIFACT_SIZES      = 1 << 9;
        /// Store aggregated counts of cache hits per query invocation.
        const QUERY_CACHE_HIT_COUNTS  = 1 << 10;

        const DEFAULT = Self::GENERIC_ACTIVITIES.bits() |
                        Self::QUERY_PROVIDERS.bits() |
                        Self::QUERY_BLOCKED.bits() |
                        Self::INCR_CACHE_LOADS.bits() |
                        Self::INCR_RESULT_HASHING.bits() |
                        Self::ARTIFACT_SIZES.bits() |
                        Self::QUERY_CACHE_HIT_COUNTS.bits();

        const ARGS = Self::QUERY_KEYS.bits() | Self::FUNCTION_ARGS.bits();
        const QUERY_CACHE_HIT_COMBINED = Self::QUERY_CACHE_HITS.bits() | Self::QUERY_CACHE_HIT_COUNTS.bits();
    }
}

// keep this in sync with the `-Z self-profile-events` help message in rustc_session/options.rs
const EVENT_FILTERS_BY_NAME: &[(&str, EventFilter)] = &[
    ("none", EventFilter::empty()),
    ("all", EventFilter::all()),
    ("default", EventFilter::DEFAULT),
    ("generic-activity", EventFilter::GENERIC_ACTIVITIES),
    ("query-provider", EventFilter::QUERY_PROVIDERS),
    ("query-cache-hit", EventFilter::QUERY_CACHE_HITS),
    ("query-cache-hit-count", EventFilter::QUERY_CACHE_HIT_COUNTS),
    ("query-blocked", EventFilter::QUERY_BLOCKED),
    ("incr-cache-load", EventFilter::INCR_CACHE_LOADS),
    ("query-keys", EventFilter::QUERY_KEYS),
    ("function-args", EventFilter::FUNCTION_ARGS),
    ("args", EventFilter::ARGS),
    ("llvm", EventFilter::LLVM),
    ("incr-result-hashing", EventFilter::INCR_RESULT_HASHING),
    ("artifact-sizes", EventFilter::ARTIFACT_SIZES),
];

/// Something that uniquely identifies a query invocation.
pub struct QueryInvocationId(pub u32);

/// Which format to use for `-Z time-passes`
#[derive(Clone, Copy, PartialEq, Hash, Debug)]
pub enum TimePassesFormat {
    /// Emit human readable text
    Text,
    /// Emit structured JSON
    Json,
}

/// A reference to the SelfProfiler. It can be cloned and sent across thread
/// boundaries at will.
#[derive(Clone)]
pub struct SelfProfilerRef {
    // This field is `None` if self-profiling is disabled for the current
    // compilation session.
    profiler: Option<Arc<SelfProfiler>>,

    // We store the filter mask directly in the reference because that doesn't
    // cost anything and allows for filtering without checking if the profiler
    // is actually enabled.
    event_filter_mask: EventFilter,

    // Print verbose generic activities to stderr.
    print_verbose_generic_activities: Option<TimePassesFormat>,
}

impl SelfProfilerRef {
    pub fn new(
        profiler: Option<Arc<SelfProfiler>>,
        print_verbose_generic_activities: Option<TimePassesFormat>,
    ) -> SelfProfilerRef {
        // If there is no SelfProfiler then the filter mask is set to NONE,
        // ensuring that nothing ever tries to actually access it.
        let event_filter_mask =
            profiler.as_ref().map_or(EventFilter::empty(), |p| p.event_filter_mask);

        SelfProfilerRef { profiler, event_filter_mask, print_verbose_generic_activities }
    }

    /// This shim makes sure that calls only get executed if the filter mask
    /// lets them pass. It also contains some trickery to make sure that
    /// code is optimized for non-profiling compilation sessions, i.e. anything
    /// past the filter check is never inlined so it doesn't clutter the fast
    /// path.
    #[inline(always)]
    fn exec<F>(&self, event_filter: EventFilter, f: F) -> TimingGuard<'_>
    where
        F: for<'a> FnOnce(&'a SelfProfiler) -> TimingGuard<'a>,
    {
        #[inline(never)]
        #[cold]
        fn cold_call<F>(profiler_ref: &SelfProfilerRef, f: F) -> TimingGuard<'_>
        where
            F: for<'a> FnOnce(&'a SelfProfiler) -> TimingGuard<'a>,
        {
            let profiler = profiler_ref.profiler.as_ref().unwrap();
            f(profiler)
        }

        if self.event_filter_mask.contains(event_filter) {
            cold_call(self, f)
        } else {
            TimingGuard::none()
        }
    }

    /// Start profiling a verbose generic activity. Profiling continues until the
    /// VerboseTimingGuard returned from this call is dropped. In addition to recording
    /// a measureme event, "verbose" generic activities also print a timing entry to
    /// stderr if the compiler is invoked with -Ztime-passes.
    pub fn verbose_generic_activity(&self, event_label: &'static str) -> VerboseTimingGuard<'_> {
        let message_and_format =
            self.print_verbose_generic_activities.map(|format| (event_label.to_owned(), format));

        VerboseTimingGuard::start(message_and_format, self.generic_activity(event_label))
    }

    /// Like `verbose_generic_activity`, but with an extra arg.
    pub fn verbose_generic_activity_with_arg<A>(
        &self,
        event_label: &'static str,
        event_arg: A,
    ) -> VerboseTimingGuard<'_>
    where
        A: Borrow<str> + Into<String>,
    {
        let message_and_format = self
            .print_verbose_generic_activities
            .map(|format| (format!("{}({})", event_label, event_arg.borrow()), format));

        VerboseTimingGuard::start(
            message_and_format,
            self.generic_activity_with_arg(event_label, event_arg),
        )
    }

    /// Start profiling a generic activity. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
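    ///
    /// A minimal usage sketch (`prof` is a `SelfProfilerRef`; the label is illustrative):
    ///
    /// ```ignore (requires a compiler session)
    /// let _timer = prof.generic_activity("codegen_crate");
    /// // ... the measured work; the event ends when `_timer` is dropped.
    /// ```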
    #[inline(always)]
    pub fn generic_activity(&self, event_label: &'static str) -> TimingGuard<'_> {
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            let event_label = profiler.get_or_alloc_cached_string(event_label);
            let event_id = EventId::from_label(event_label);
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Start profiling with some event filter for a given event. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn generic_activity_with_event_id(&self, event_id: EventId) -> TimingGuard<'_> {
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Start profiling a generic activity. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn generic_activity_with_arg<A>(
        &self,
        event_label: &'static str,
        event_arg: A,
    ) -> TimingGuard<'_>
    where
        A: Borrow<str> + Into<String>,
    {
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            let builder = EventIdBuilder::new(&profiler.profiler);
            let event_label = profiler.get_or_alloc_cached_string(event_label);
            let event_id = if profiler.event_filter_mask.contains(EventFilter::FUNCTION_ARGS) {
                let event_arg = profiler.get_or_alloc_cached_string(event_arg);
                builder.from_label_and_arg(event_label, event_arg)
            } else {
                builder.from_label(event_label)
            };
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Start profiling a generic activity, allowing costly arguments to be recorded. Profiling
    /// continues until the `TimingGuard` returned from this call is dropped.
    ///
    /// If the arguments to a generic activity are cheap to create, use `generic_activity_with_arg`
    /// or `generic_activity_with_args` for their simpler API. However, if they are costly or
    /// require allocation in sufficiently hot contexts, then this allows for a closure to be called
    /// only when arguments were asked to be recorded via `-Z self-profile-events=args`.
    ///
    /// In this case, the closure will be passed a `&mut EventArgRecorder`, to help with recording
    /// one or many arguments within the generic activity being profiled, by calling its
    /// `record_arg` method for example.
    ///
    /// This `EventArgRecorder` may implement more specific traits from other rustc crates, e.g. for
    /// richer handling of rustc-specific argument types, while keeping this single entry-point API
    /// for recording arguments.
    ///
    /// Note: recording at least one argument is *required* for the self-profiler to create the
    /// `TimingGuard`. A panic will be triggered if that doesn't happen. This function exists
    /// explicitly to record arguments, so it fails loudly when there are none to record.
    ///
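    /// A minimal usage sketch (the label and the `expensive_description` helper
    /// are illustrative):
    ///
    /// ```ignore (requires a compiler session)
    /// let _timer = prof.generic_activity_with_arg_recorder("encode_query_results", |recorder| {
    ///     // Only called when argument recording is enabled.
    ///     recorder.record_arg(expensive_description());
    /// });
    /// ```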
    #[inline(always)]
    pub fn generic_activity_with_arg_recorder<F>(
        &self,
        event_label: &'static str,
        mut f: F,
    ) -> TimingGuard<'_>
    where
        F: FnMut(&mut EventArgRecorder<'_>),
    {
        // Ensure this event will only be recorded when self-profiling is turned on.
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            let builder = EventIdBuilder::new(&profiler.profiler);
            let event_label = profiler.get_or_alloc_cached_string(event_label);

            // Ensure the closure to create event arguments will only be called when argument
            // recording is turned on.
            let event_id = if profiler.event_filter_mask.contains(EventFilter::FUNCTION_ARGS) {
                // Set up the builder and call the user-provided closure to record potentially
                // costly event arguments.
                let mut recorder = EventArgRecorder { profiler, args: SmallVec::new() };
                f(&mut recorder);

                // It is expected that the closure will record at least one argument. If that
                // doesn't happen, it's a bug: we've been explicitly called in order to record
                // arguments, so we fail loudly when there are none to record.
                if recorder.args.is_empty() {
                    panic!(
                        "The closure passed to `generic_activity_with_arg_recorder` needs to \
                         record at least one argument"
                    );
                }

                builder.from_label_and_args(event_label, &recorder.args)
            } else {
                builder.from_label(event_label)
            };
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Record the size of an artifact that the compiler produces.
    ///
    /// `artifact_kind` is the class of artifact (e.g., query_cache, object_file, etc.).
    /// `artifact_name` is an identifier for the specific artifact being stored (usually a filename).
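    ///
    /// A minimal usage sketch (the names are illustrative):
    ///
    /// ```ignore (requires a compiler session)
    /// prof.artifact_size("query_cache", "work_products.bin", size_in_bytes);
    /// ```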
    #[inline(always)]
    pub fn artifact_size<A>(&self, artifact_kind: &str, artifact_name: A, size: u64)
    where
        A: Borrow<str> + Into<String>,
    {
        drop(self.exec(EventFilter::ARTIFACT_SIZES, |profiler| {
            let builder = EventIdBuilder::new(&profiler.profiler);
            let event_label = profiler.get_or_alloc_cached_string(artifact_kind);
            let event_arg = profiler.get_or_alloc_cached_string(artifact_name);
            let event_id = builder.from_label_and_arg(event_label, event_arg);
            let thread_id = get_thread_id();

            profiler.profiler.record_integer_event(
                profiler.artifact_size_event_kind,
                event_id,
                thread_id,
                size,
            );

            TimingGuard::none()
        }))
    }

    #[inline(always)]
    pub fn generic_activity_with_args(
        &self,
        event_label: &'static str,
        event_args: &[String],
    ) -> TimingGuard<'_> {
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            let builder = EventIdBuilder::new(&profiler.profiler);
            let event_label = profiler.get_or_alloc_cached_string(event_label);
            let event_id = if profiler.event_filter_mask.contains(EventFilter::FUNCTION_ARGS) {
                let event_args: Vec<_> = event_args
                    .iter()
                    .map(|s| profiler.get_or_alloc_cached_string(&s[..]))
                    .collect();
                builder.from_label_and_args(event_label, &event_args)
            } else {
                builder.from_label(event_label)
            };
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Start profiling a query provider. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn query_provider(&self) -> TimingGuard<'_> {
        self.exec(EventFilter::QUERY_PROVIDERS, |profiler| {
            TimingGuard::start(profiler, profiler.query_event_kind, EventId::INVALID)
        })
    }

    /// Record a query in-memory cache hit.
    #[inline(always)]
    pub fn query_cache_hit(&self, query_invocation_id: QueryInvocationId) {
        #[inline(never)]
        #[cold]
        fn cold_call(profiler_ref: &SelfProfilerRef, query_invocation_id: QueryInvocationId) {
            if profiler_ref.event_filter_mask.contains(EventFilter::QUERY_CACHE_HIT_COUNTS) {
                profiler_ref
                    .profiler
                    .as_ref()
                    .unwrap()
                    .increment_query_cache_hit_counters(QueryInvocationId(query_invocation_id.0));
            }
            if unlikely(profiler_ref.event_filter_mask.contains(EventFilter::QUERY_CACHE_HITS)) {
                profiler_ref.instant_query_event(
                    |profiler| profiler.query_cache_hit_event_kind,
                    query_invocation_id,
                );
            }
        }

        // We check both kinds of query cache hit events at once, to reduce overhead in the
        // common case (with self-profile disabled).
        if unlikely(self.event_filter_mask.intersects(EventFilter::QUERY_CACHE_HIT_COMBINED)) {
            cold_call(self, query_invocation_id);
        }
    }

    /// Start profiling a query being blocked on a concurrent execution.
    /// Profiling continues until the TimingGuard returned from this call is
    /// dropped.
    #[inline(always)]
    pub fn query_blocked(&self) -> TimingGuard<'_> {
        self.exec(EventFilter::QUERY_BLOCKED, |profiler| {
            TimingGuard::start(profiler, profiler.query_blocked_event_kind, EventId::INVALID)
        })
    }

    /// Start profiling how long it takes to load a query result from the
    /// incremental compilation on-disk cache. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn incr_cache_loading(&self) -> TimingGuard<'_> {
        self.exec(EventFilter::INCR_CACHE_LOADS, |profiler| {
            TimingGuard::start(
                profiler,
                profiler.incremental_load_result_event_kind,
                EventId::INVALID,
            )
        })
    }

    /// Start profiling how long it takes to hash query results for incremental compilation.
    /// Profiling continues until the TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn incr_result_hashing(&self) -> TimingGuard<'_> {
        self.exec(EventFilter::INCR_RESULT_HASHING, |profiler| {
            TimingGuard::start(
                profiler,
                profiler.incremental_result_hashing_event_kind,
                EventId::INVALID,
            )
        })
    }

    #[inline(always)]
    fn instant_query_event(
        &self,
        event_kind: fn(&SelfProfiler) -> StringId,
        query_invocation_id: QueryInvocationId,
    ) {
        let event_id = StringId::new_virtual(query_invocation_id.0);
        let thread_id = get_thread_id();
        let profiler = self.profiler.as_ref().unwrap();
        profiler.profiler.record_instant_event(
            event_kind(profiler),
            EventId::from_virtual(event_id),
            thread_id,
        );
    }

    pub fn with_profiler(&self, f: impl FnOnce(&SelfProfiler)) {
        if let Some(profiler) = &self.profiler {
            f(profiler)
        }
    }

    /// Gets a `StringId` for the given string. This method makes sure that
    /// any strings going through it will only be allocated once in the
    /// profiling data.
    /// Returns `None` if self-profiling is not enabled.
    pub fn get_or_alloc_cached_string(&self, s: &str) -> Option<StringId> {
        self.profiler.as_ref().map(|p| p.get_or_alloc_cached_string(s))
    }

    /// Store query cache hits to the self-profile log.
    /// Should be called once at the end of the compilation session.
    ///
    /// The cache hits are stored per **query invocation**, not **per query kind/type**.
    /// `analyzeme` can later deduplicate individual query labels from the QueryInvocationId event
    /// IDs.
    pub fn store_query_cache_hits(&self) {
        if self.event_filter_mask.contains(EventFilter::QUERY_CACHE_HIT_COUNTS) {
            let profiler = self.profiler.as_ref().unwrap();
            let query_hits = profiler.query_hits.read();
            let builder = EventIdBuilder::new(&profiler.profiler);
            let thread_id = get_thread_id();
            for (query_invocation, hit_count) in query_hits.iter().enumerate() {
                let hit_count = hit_count.load(Ordering::Relaxed);
                // No need to record empty cache hit counts
                if hit_count > 0 {
                    let event_id =
                        builder.from_label(StringId::new_virtual(query_invocation as u64));
                    profiler.profiler.record_integer_event(
                        profiler.query_cache_hit_count_event_kind,
                        event_id,
                        thread_id,
                        hit_count,
                    );
                }
            }
        }
    }

    #[inline]
    pub fn enabled(&self) -> bool {
        self.profiler.is_some()
    }

    #[inline]
    pub fn llvm_recording_enabled(&self) -> bool {
        self.event_filter_mask.contains(EventFilter::LLVM)
    }
    #[inline]
    pub fn get_self_profiler(&self) -> Option<Arc<SelfProfiler>> {
        self.profiler.clone()
    }

    /// Is expensive recording of query keys and/or function arguments enabled?
    pub fn is_args_recording_enabled(&self) -> bool {
        self.enabled() && self.event_filter_mask.intersects(EventFilter::ARGS)
    }
}

/// A helper for recording costly arguments to self-profiling events. Used with
/// `SelfProfilerRef::generic_activity_with_arg_recorder`.
pub struct EventArgRecorder<'p> {
    /// The `SelfProfiler` used to intern the event arguments that users will ask to record.
    profiler: &'p SelfProfiler,

    /// The interned event arguments to be recorded in the generic activity event.
    ///
    /// The most common case, when actually recording event arguments, is to have one argument;
    /// recording two arguments happens in a couple of places.
    args: SmallVec<[StringId; 2]>,
}

impl EventArgRecorder<'_> {
    /// Records a single argument within the current generic activity being profiled.
    ///
    /// Note: when self-profiling with costly event arguments, at least one argument
    /// needs to be recorded. A panic will be triggered if that doesn't happen.
    pub fn record_arg<A>(&mut self, event_arg: A)
    where
        A: Borrow<str> + Into<String>,
    {
        let event_arg = self.profiler.get_or_alloc_cached_string(event_arg);
        self.args.push(event_arg);
    }
}

pub struct SelfProfiler {
    profiler: Profiler,
    event_filter_mask: EventFilter,

    string_cache: RwLock<FxHashMap<String, StringId>>,

    /// Recording individual query cache hits as "instant" measureme events
    /// is incredibly expensive. Instead of doing that, we simply aggregate
    /// cache hit *counts* per query invocation, and then store the final count
    /// of cache hits per invocation at the end of the compilation session.
    ///
    /// With this approach, we don't know the individual thread IDs and timestamps
    /// of cache hits, but it has very little overhead on top of `-Zself-profile`.
    /// Recording the cache hits as individual events made compilation 3-5x slower.
    ///
    /// Query invocation IDs should be monotonic integers, so we can store them in a vec,
    /// rather than using a hashmap.
    query_hits: RwLock<Vec<AtomicU64>>,

    query_event_kind: StringId,
    generic_activity_event_kind: StringId,
    incremental_load_result_event_kind: StringId,
    incremental_result_hashing_event_kind: StringId,
    query_blocked_event_kind: StringId,
    query_cache_hit_event_kind: StringId,
    artifact_size_event_kind: StringId,
    /// Total cache hits per query invocation
    query_cache_hit_count_event_kind: StringId,
}

impl SelfProfiler {
    pub fn new(
        output_directory: &Path,
        crate_name: Option<&str>,
        event_filters: Option<&[String]>,
        counter_name: &str,
    ) -> Result<SelfProfiler, Box<dyn Error + Send + Sync>> {
        fs::create_dir_all(output_directory)?;

        let crate_name = crate_name.unwrap_or("unknown-crate");
        // HACK(eddyb) we need to pad the PID, strange as it may seem, as its
        // length can behave as a source of entropy for heap addresses, when
        // ASLR is disabled and the heap is otherwise deterministic.
        let pid: u32 = process::id();
        let filename = format!("{crate_name}-{pid:07}.rustc_profile");
        let path = output_directory.join(filename);
        let profiler =
            Profiler::with_counter(&path, measureme::counters::Counter::by_name(counter_name)?)?;

        let query_event_kind = profiler.alloc_string("Query");
        let generic_activity_event_kind = profiler.alloc_string("GenericActivity");
        let incremental_load_result_event_kind = profiler.alloc_string("IncrementalLoadResult");
        let incremental_result_hashing_event_kind =
            profiler.alloc_string("IncrementalResultHashing");
        let query_blocked_event_kind = profiler.alloc_string("QueryBlocked");
        let query_cache_hit_event_kind = profiler.alloc_string("QueryCacheHit");
        let artifact_size_event_kind = profiler.alloc_string("ArtifactSize");
        let query_cache_hit_count_event_kind = profiler.alloc_string("QueryCacheHitCount");

        let mut event_filter_mask = EventFilter::empty();

        if let Some(event_filters) = event_filters {
            let mut unknown_events = vec![];
            for item in event_filters {
                if let Some(&(_, mask)) =
                    EVENT_FILTERS_BY_NAME.iter().find(|&(name, _)| name == item)
                {
                    event_filter_mask |= mask;
                } else {
                    unknown_events.push(item.clone());
                }
            }

            // Warn about any unknown event names
            if !unknown_events.is_empty() {
                unknown_events.sort();
                unknown_events.dedup();

                warn!(
                    "Unknown self-profiler events specified: {}. Available options are: {}.",
                    unknown_events.join(", "),
                    EVENT_FILTERS_BY_NAME
                        .iter()
                        .map(|&(name, _)| name.to_string())
                        .collect::<Vec<_>>()
                        .join(", ")
                );
            }
        } else {
            event_filter_mask = EventFilter::DEFAULT;
        }

        Ok(SelfProfiler {
            profiler,
            event_filter_mask,
            string_cache: RwLock::new(FxHashMap::default()),
            query_event_kind,
            generic_activity_event_kind,
            incremental_load_result_event_kind,
            incremental_result_hashing_event_kind,
            query_blocked_event_kind,
            query_cache_hit_event_kind,
            artifact_size_event_kind,
            query_cache_hit_count_event_kind,
            query_hits: Default::default(),
        })
    }

    /// Allocates a new string in the profiling data. Does not do any caching
    /// or deduplication.
    pub fn alloc_string<STR: SerializableString + ?Sized>(&self, s: &STR) -> StringId {
        self.profiler.alloc_string(s)
    }

    /// Record a cache hit for a query invocation.
    pub fn increment_query_cache_hit_counters(&self, id: QueryInvocationId) {
        // Fast path: assume that the query was already encountered before, and just record
        // a cache hit.
        let mut guard = self.query_hits.upgradable_read();
        let query_hits = &guard;
        let index = id.0 as usize;
        if index < query_hits.len() {
            // We only want to increment the count; no other synchronization is required.
            query_hits[index].fetch_add(1, Ordering::Relaxed);
        } else {
            // If not, we need to extend the query hit map to the highest observed ID.
            guard.with_upgraded(|vec| {
                vec.resize_with(index + 1, || AtomicU64::new(0));
                vec[index] = AtomicU64::from(1);
            });
        }
    }

    /// Gets a `StringId` for the given string. This method makes sure that
    /// any strings going through it will only be allocated once in the
    /// profiling data.
    pub fn get_or_alloc_cached_string<A>(&self, s: A) -> StringId
    where
        A: Borrow<str> + Into<String>,
    {
        // Only acquire a read-lock first since we assume that the string is
        // already present in the common case.
        {
            let string_cache = self.string_cache.read();

            if let Some(&id) = string_cache.get(s.borrow()) {
                return id;
            }
        }

        let mut string_cache = self.string_cache.write();
        // Check if the string has already been added in the small time window
        // between dropping the read lock and acquiring the write lock.
        match string_cache.entry(s.into()) {
            Entry::Occupied(e) => *e.get(),
            Entry::Vacant(e) => {
                let string_id = self.profiler.alloc_string(&e.key()[..]);
                *e.insert(string_id)
            }
        }
    }

    pub fn map_query_invocation_id_to_string(&self, from: QueryInvocationId, to: StringId) {
        let from = StringId::new_virtual(from.0);
        self.profiler.map_virtual_to_concrete_string(from, to);
    }

    pub fn bulk_map_query_invocation_id_to_single_string<I>(&self, from: I, to: StringId)
    where
        I: Iterator<Item = QueryInvocationId> + ExactSizeIterator,
    {
        let from = from.map(|qid| StringId::new_virtual(qid.0));
        self.profiler.bulk_map_virtual_to_single_concrete_string(from, to);
    }

    pub fn query_key_recording_enabled(&self) -> bool {
        self.event_filter_mask.contains(EventFilter::QUERY_KEYS)
    }

    pub fn event_id_builder(&self) -> EventIdBuilder<'_> {
        EventIdBuilder::new(&self.profiler)
    }
}

#[must_use]
pub struct TimingGuard<'a>(Option<measureme::TimingGuard<'a>>);

impl<'a> TimingGuard<'a> {
    #[inline]
    pub fn start(
        profiler: &'a SelfProfiler,
        event_kind: StringId,
        event_id: EventId,
    ) -> TimingGuard<'a> {
        let thread_id = get_thread_id();
        let raw_profiler = &profiler.profiler;
        let timing_guard =
            raw_profiler.start_recording_interval_event(event_kind, event_id, thread_id);
        TimingGuard(Some(timing_guard))
    }

    #[inline]
    pub fn finish_with_query_invocation_id(self, query_invocation_id: QueryInvocationId) {
        if let Some(guard) = self.0 {
            outline(|| {
                let event_id = StringId::new_virtual(query_invocation_id.0);
                let event_id = EventId::from_virtual(event_id);
                guard.finish_with_override_event_id(event_id);
            });
        }
    }

    #[inline]
    pub fn none() -> TimingGuard<'a> {
        TimingGuard(None)
    }

    #[inline(always)]
    pub fn run<R>(self, f: impl FnOnce() -> R) -> R {
        let _timer = self;
        f()
    }
}

struct VerboseInfo {
    start_time: Instant,
    start_rss: Option<usize>,
    message: String,
    format: TimePassesFormat,
}

#[must_use]
pub struct VerboseTimingGuard<'a> {
    info: Option<VerboseInfo>,
    _guard: TimingGuard<'a>,
}

impl<'a> VerboseTimingGuard<'a> {
    pub fn start(
        message_and_format: Option<(String, TimePassesFormat)>,
        _guard: TimingGuard<'a>,
    ) -> Self {
        VerboseTimingGuard {
            _guard,
            info: message_and_format.map(|(message, format)| VerboseInfo {
                start_time: Instant::now(),
                start_rss: get_resident_set_size(),
                message,
                format,
            }),
        }
    }

    #[inline(always)]
    pub fn run<R>(self, f: impl FnOnce() -> R) -> R {
        let _timer = self;
        f()
    }
}

impl Drop for VerboseTimingGuard<'_> {
    fn drop(&mut self) {
        if let Some(info) = &self.info {
            let end_rss = get_resident_set_size();
            let dur = info.start_time.elapsed();
            print_time_passes_entry(&info.message, dur, info.start_rss, end_rss, info.format);
        }
    }
}

struct JsonTimePassesEntry<'a> {
    pass: &'a str,
    time: f64,
    start_rss: Option<usize>,
    end_rss: Option<usize>,
}

impl Display for JsonTimePassesEntry<'_> {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        let Self { pass: what, time, start_rss, end_rss } = self;
        write!(f, r#"{{"pass":"{what}","time":{time},"rss_start":"#).unwrap();
        match start_rss {
            Some(rss) => write!(f, "{rss}")?,
            None => write!(f, "null")?,
        }
        write!(f, r#","rss_end":"#)?;
        match end_rss {
            Some(rss) => write!(f, "{rss}")?,
            None => write!(f, "null")?,
        }
        write!(f, "}}")?;
        Ok(())
    }
}
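
// A sketch of the resulting output line (pass name and numbers are
// illustrative); `print_time_passes_entry` below adds the `time: ` prefix:
//
//     time: {"pass":"total","time":0.123,"rss_start":8388608,"rss_end":12582912}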

pub fn print_time_passes_entry(
    what: &str,
    dur: Duration,
    start_rss: Option<usize>,
    end_rss: Option<usize>,
    format: TimePassesFormat,
) {
    match format {
        TimePassesFormat::Json => {
            let entry =
                JsonTimePassesEntry { pass: what, time: dur.as_secs_f64(), start_rss, end_rss };

            eprintln!(r#"time: {entry}"#);
            return;
        }
        TimePassesFormat::Text => (),
    }

    // Print the pass if its duration is greater than 5 ms, or it changed the
    // measured RSS.
    let is_notable = || {
        if dur.as_millis() > 5 {
            return true;
        }

        if let (Some(start_rss), Some(end_rss)) = (start_rss, end_rss) {
            let change_rss = end_rss.abs_diff(start_rss);
            if change_rss > 0 {
                return true;
            }
        }

        false
    };
    if !is_notable() {
        return;
    }

    let rss_to_mb = |rss| (rss as f64 / 1_000_000.0).round() as usize;
    let rss_change_to_mb = |rss| (rss as f64 / 1_000_000.0).round() as i128;

    let mem_string = match (start_rss, end_rss) {
        (Some(start_rss), Some(end_rss)) => {
            let change_rss = end_rss as i128 - start_rss as i128;

            format!(
                "; rss: {:>4}MB -> {:>4}MB ({:>+5}MB)",
                rss_to_mb(start_rss),
                rss_to_mb(end_rss),
                rss_change_to_mb(change_rss),
            )
        }
        (Some(start_rss), None) => format!("; rss start: {:>4}MB", rss_to_mb(start_rss)),
        (None, Some(end_rss)) => format!("; rss end: {:>4}MB", rss_to_mb(end_rss)),
        (None, None) => String::new(),
    };

    eprintln!("time: {:>7}{}\t{}", duration_to_secs_str(dur), mem_string, what);
}
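
// In the text format, the eprintln! above produces a line along these lines
// (a sketch; numbers are illustrative and the pass name follows a tab):
//
//     time:   0.123; rss:   50MB ->   55MB (   +5MB)    typeck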

// Hack up our own formatting for the duration to make it easier for scripts
// to parse (always use the same number of decimal places and the same unit).
pub fn duration_to_secs_str(dur: std::time::Duration) -> String {
    format!("{:.3}", dur.as_secs_f64())
}

fn get_thread_id() -> u32 {
    std::thread::current().id().as_u64().get() as u32
}

// Memory reporting
cfg_select! {
    windows => {
        pub fn get_resident_set_size() -> Option<usize> {
            use windows::{
                Win32::System::ProcessStatus::{K32GetProcessMemoryInfo, PROCESS_MEMORY_COUNTERS},
                Win32::System::Threading::GetCurrentProcess,
            };

            let mut pmc = PROCESS_MEMORY_COUNTERS::default();
            let pmc_size = size_of_val(&pmc);
            unsafe {
                K32GetProcessMemoryInfo(
                    GetCurrentProcess(),
                    &mut pmc,
                    pmc_size as u32,
                )
            }
            .ok()
            .ok()?;

            Some(pmc.WorkingSetSize)
        }
    }
    target_os = "macos" => {
        pub fn get_resident_set_size() -> Option<usize> {
            use libc::{c_int, c_void, getpid, proc_pidinfo, proc_taskinfo, PROC_PIDTASKINFO};
            use std::mem;
            const PROC_TASKINFO_SIZE: c_int = size_of::<proc_taskinfo>() as c_int;

            unsafe {
                let mut info: proc_taskinfo = mem::zeroed();
                let info_ptr = &mut info as *mut proc_taskinfo as *mut c_void;
                let pid = getpid() as c_int;
                let ret = proc_pidinfo(pid, PROC_PIDTASKINFO, 0, info_ptr, PROC_TASKINFO_SIZE);
                if ret == PROC_TASKINFO_SIZE {
                    Some(info.pti_resident_size as usize)
                } else {
                    None
                }
            }
        }
    }
    unix => {
        pub fn get_resident_set_size() -> Option<usize> {
            // Field 1 of /proc/self/statm is the resident set size, in pages.
            let field = 1;
            let contents = fs::read("/proc/self/statm").ok()?;
            let contents = String::from_utf8(contents).ok()?;
            let s = contents.split_whitespace().nth(field)?;
            let npages = s.parse::<usize>().ok()?;
            // This estimate assumes a 4096-byte page size.
            Some(npages * 4096)
        }
    }
    _ => {
        pub fn get_resident_set_size() -> Option<usize> {
            None
        }
    }
}

#[cfg(test)]
mod tests;