tokio/runtime/task/mod.rs
//! The task module.
//!
//! The task module contains the code that manages spawned tasks and provides a
//! safe API for the rest of the runtime to use. Each task in a runtime is
//! stored in an `OwnedTasks` or `LocalOwnedTasks` object.
//!
//! # Task reference types
//!
//! A task is usually referenced by multiple handles, and there are several
//! types of handles.
//!
//! * `OwnedTask` - tasks stored in an `OwnedTasks` or `LocalOwnedTasks` are
//!   of this reference type.
//!
//! * `JoinHandle` - each task has a `JoinHandle` that allows access to the
//!   output of the task.
//!
//! * `Waker` - every waker for a task has this reference type. There can be
//!   any number of waker references.
//!
//! * `Notified` - tracks whether the task is notified.
//!
//! * `Unowned` - this task reference type is used for tasks not stored in any
//!   runtime. Mainly used for blocking tasks, but also in tests.
//!
//! The task uses a reference count to keep track of how many active references
//! exist. The `Unowned` reference type takes up two ref-counts. All other
//! reference types take up a single ref-count.
//!
//! Besides the waker type, each task has at most one of each reference type.
//!
//! # State
//!
//! The task stores its state in an atomic `usize` with various bitfields for the
//! necessary information. The state has the following bitfields:
//!
//! * `RUNNING` - Tracks whether the task is currently being polled or cancelled.
//!   This bit functions as a lock around the task.
//!
//! * `COMPLETE` - Is one once the future has fully completed and has been
//!   dropped. Never unset once set. Never set together with RUNNING.
//!
//! * `NOTIFIED` - Tracks whether a `Notified` object currently exists.
//!
//! * `CANCELLED` - Is set to one for tasks that should be cancelled as soon as
//!   possible. May take any value for completed tasks.
//!
//! * `JOIN_INTEREST` - Is set to one if there exists a `JoinHandle`.
//!
//! * `JOIN_WAKER` - Acts as an access control bit for the join handle waker. The
//!   protocol for its usage is described below.
//!
//! The rest of the bits are used for the ref-count.
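//!
//! For orientation, the sketch below shows one possible layout of the state
//! word. It is illustrative only: the names mirror the bitfields above, but
//! the exact values are an assumption, and the real constants live in
//! `state.rs`.
//!
//! ```ignore
//! const RUNNING: usize       = 0b00_0001;
//! const COMPLETE: usize      = 0b00_0010;
//! const NOTIFIED: usize      = 0b00_0100;
//! const CANCELLED: usize     = 0b00_1000;
//! const JOIN_INTEREST: usize = 0b01_0000;
//! const JOIN_WAKER: usize    = 0b10_0000;
//!
//! // Everything above the flag bits stores the reference count.
//! const REF_COUNT_SHIFT: usize = 6;
//! const REF_ONE: usize = 1 << REF_COUNT_SHIFT;
//! ```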
//!
//! # Fields in the task
//!
//! The task has various fields. This section describes how and when it is safe
//! to access a field.
//!
//! * The state field is accessed with atomic instructions.
//!
//! * The `OwnedTask` reference has exclusive access to the `owned` field.
//!
//! * The `Notified` reference has exclusive access to the `queue_next` field.
//!
//! * The `owner_id` field can be set as part of construction of the task, but
//!   is otherwise immutable and anyone can access the field immutably without
//!   synchronization.
//!
//! * If COMPLETE is one, then the `JoinHandle` has exclusive access to the
//!   stage field. If COMPLETE is zero, then the RUNNING bitfield functions as
//!   a lock for the stage field, and it can be accessed only by the thread
//!   that set RUNNING to one.
//!
//! * The waker field may be concurrently accessed by different threads: in one
//!   thread the runtime may complete a task and *read* the waker field to
//!   invoke the waker, and in another thread the task's `JoinHandle` may be
//!   polled, and if the task hasn't yet completed, the `JoinHandle` may *write*
//!   a waker to the waker field. The `JOIN_WAKER` bit ensures safe access by
//!   multiple threads to the waker field using the following rules:
//!
//!   1. `JOIN_WAKER` is initialized to zero.
//!
//!   2. If `JOIN_WAKER` is zero, then the `JoinHandle` has exclusive (mutable)
//!      access to the waker field.
//!
//!   3. If `JOIN_WAKER` is one, then the `JoinHandle` has shared (read-only)
//!      access to the waker field.
//!
//!   4. If `JOIN_WAKER` is one and COMPLETE is one, then the runtime has shared
//!      (read-only) access to the waker field.
//!
//!   5. If the `JoinHandle` needs to write to the waker field, then the
//!      `JoinHandle` needs to (i) successfully set `JOIN_WAKER` to zero if it is
//!      not already zero to gain exclusive access to the waker field per rule
//!      2, (ii) write a waker, and (iii) successfully set `JOIN_WAKER` to one.
//!      If the `JoinHandle` unsets `JOIN_WAKER` in the process of being dropped
//!      to clear the waker field, only steps (i) and (ii) are relevant.
//!
//!   6. The `JoinHandle` can change `JOIN_WAKER` only if COMPLETE is zero (i.e.
//!      the task hasn't yet completed). The runtime can change `JOIN_WAKER` only
//!      if COMPLETE is one.
//!
//!   7. If `JOIN_INTEREST` is zero and COMPLETE is one, then the runtime has
//!      exclusive (mutable) access to the waker field. This might happen if the
//!      `JoinHandle` gets dropped right after the task completes and the runtime
//!      sets the `COMPLETE` bit. In this case the runtime needs the mutable
//!      access to the waker field to drop it.
//!
//!   Rule 6 implies that steps (i) or (iii) of rule 5 may fail due to a race.
//!   If step (i) fails, then the attempt to write a waker is aborted. If step
//!   (iii) fails because COMPLETE is set to one by another thread after step
//!   (i), then the waker field is cleared. Once COMPLETE is one (i.e. the task
//!   has completed), the `JoinHandle` will not modify `JOIN_WAKER`. After the
//!   runtime sets COMPLETE to one, it invokes the waker if there is one, so
//!   when a task completes, the `JOIN_WAKER` bit indicates to the runtime
//!   whether it should invoke the waker. After the runtime is done using the
//!   waker during task completion, it unsets the `JOIN_WAKER` bit to give the
//!   `JoinHandle` exclusive access again so that it is able to drop the waker
//!   at a later point.
//!
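//! As a concrete illustration of steps (i) and (iii) of rule 5, the sketch
//! below shows how the `JoinHandle` side could toggle `JOIN_WAKER` with a
//! compare-exchange loop while respecting rule 6. It reuses the illustrative
//! constants from the `# State` section; the function name and shape are
//! assumptions, not the actual implementation in `state.rs`.
//!
//! ```ignore
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! /// Sets (or clears) JOIN_WAKER on behalf of the JoinHandle. Returns false
//! /// if the task has already completed, since rule 6 then reserves the bit
//! /// for the runtime.
//! fn toggle_join_waker(state: &AtomicUsize, set: bool) -> bool {
//!     let mut curr = state.load(Ordering::Acquire);
//!     loop {
//!         if curr & COMPLETE != 0 {
//!             return false;
//!         }
//!         let next = if set { curr | JOIN_WAKER } else { curr & !JOIN_WAKER };
//!         match state.compare_exchange_weak(curr, next, Ordering::AcqRel, Ordering::Acquire) {
//!             Ok(_) => return true,
//!             Err(actual) => curr = actual,
//!         }
//!     }
//! }
//!
//! // Step (i):   toggle_join_waker(&state, false) gains exclusive access.
//! // Step (ii):  write the waker to the waker field.
//! // Step (iii): toggle_join_waker(&state, true); if this fails because
//! //             COMPLETE was set in the meantime, clear the waker field.
//! ```
//!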
//! All other fields are immutable and can be accessed immutably without
//! synchronization by anyone.
//!
//! # Safety
//!
//! This section goes through various situations and explains why the API is
//! safe in that situation.
//!
//! ## Polling or dropping the future
//!
//! Any mutable access to the future happens after obtaining a lock by modifying
//! the RUNNING field, so exclusive access is ensured.
//!
//! When the task completes, exclusive access to the output is transferred to
//! the `JoinHandle`. If the `JoinHandle` is already dropped when the transition
//! to complete happens, the thread performing that transition retains exclusive
//! access to the output and should immediately drop it.
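//!
//! A minimal sketch of that completion transition, assuming the illustrative
//! constants from the `# State` section (the name `transition_to_complete` and
//! the exact shape are assumptions; the real transition lives in `state.rs`):
//!
//! ```ignore
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! /// Clears RUNNING and sets COMPLETE in one atomic step, then reports
//! /// whether a `JoinHandle` exists. If it does not, the caller drops the
//! /// output immediately; otherwise exclusive access to the output is
//! /// transferred to the `JoinHandle`.
//! fn transition_to_complete(state: &AtomicUsize) -> bool {
//!     // RUNNING is held by this thread and COMPLETE is still zero, so
//!     // XOR-ing both bits flips RUNNING -> 0 and COMPLETE -> 1.
//!     let prev = state.fetch_xor(RUNNING | COMPLETE, Ordering::AcqRel);
//!     debug_assert!(prev & RUNNING != 0);
//!     debug_assert!(prev & COMPLETE == 0);
//!     prev & JOIN_INTEREST != 0
//! }
//! ```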
//!
//! ## Non-Send futures
//!
//! If a future is not Send, then it is bound to a `LocalOwnedTasks`. The future
//! will only ever be polled or dropped given a `LocalNotified` or inside a call
//! to `LocalOwnedTasks::shutdown_all`. In either case, it is guaranteed that the
//! future is on the right thread.
//!
//! If the task is never removed from the `LocalOwnedTasks`, then it is leaked, so
//! there is no risk that the task is dropped on some other thread when the last
//! ref-count drops.
//!
//! ## Non-Send output
//!
//! When a task completes, the output is placed in the stage of the task. Then,
//! a transition that sets COMPLETE to true is performed, and the value of
//! `JOIN_INTEREST` when this transition happens is read.
//!
//! If `JOIN_INTEREST` is zero when the transition to COMPLETE happens, then the
//! output is immediately dropped.
//!
//! If `JOIN_INTEREST` is one when the transition to COMPLETE happens, then the
//! `JoinHandle` is responsible for cleaning up the output. If the output is not
//! Send, then this happens:
//!
//! 1. The output is created on the thread that the future was polled on. Since
//!    only non-Send futures can have non-Send output, the future was polled on
//!    the thread that the future was spawned from.
//! 2. Since `JoinHandle<Output>` is not Send if Output is not Send, the
//!    `JoinHandle` is also on the thread that the future was spawned from.
//! 3. Thus, the `JoinHandle` will not move the output across threads when it
//!    takes or drops the output.
//!
//! ## Recursive poll/shutdown
//!
//! Calling poll from inside a shutdown call or vice-versa is not prevented by
//! the API exposed by the task module, so this has to be safe. In either case,
//! the lock in the RUNNING bitfield makes the inner call return immediately. If
//! the inner call is a `shutdown` call, then the CANCELLED bit is set, and the
//! poll call will notice it when the poll finishes, and the task is cancelled
//! at that point.
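//!
//! The sketch below illustrates that last point: when the poll finishes and
//! the RUNNING "lock" is released, the state transition reports whether a
//! re-entrant `shutdown` call set CANCELLED in the meantime. The constants are
//! the illustrative ones from the `# State` section and the names are
//! hypothetical; the real logic lives in `state.rs` and `harness.rs`.
//!
//! ```ignore
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! enum PollOutcome {
//!     Done,
//!     Cancel, // a shutdown call raced with this poll; cancel the task now
//! }
//!
//! fn transition_to_idle(state: &AtomicUsize) -> PollOutcome {
//!     let mut curr = state.load(Ordering::Acquire);
//!     loop {
//!         // Release the RUNNING "lock" that was taken before polling.
//!         let next = curr & !RUNNING;
//!         match state.compare_exchange_weak(curr, next, Ordering::AcqRel, Ordering::Acquire) {
//!             Ok(prev) if prev & CANCELLED != 0 => return PollOutcome::Cancel,
//!             Ok(_) => return PollOutcome::Done,
//!             Err(actual) => curr = actual,
//!         }
//!     }
//! }
//! ```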

// Some task infrastructure is here to support `JoinSet`, which is currently
// unstable. This should be removed once `JoinSet` is stabilized.
#![cfg_attr(not(tokio_unstable), allow(dead_code))]

mod core;
use self::core::Cell;
use self::core::Header;

mod error;
pub use self::error::JoinError;

mod harness;
use self::harness::Harness;

mod id;
#[cfg_attr(not(tokio_unstable), allow(unreachable_pub, unused_imports))]
pub use id::{id, try_id, Id};

#[cfg(feature = "rt")]
mod abort;
mod join;

#[cfg(feature = "rt")]
pub use self::abort::AbortHandle;

pub use self::join::JoinHandle;

mod list;
pub(crate) use self::list::{LocalOwnedTasks, OwnedTasks};

mod raw;
pub(crate) use self::raw::RawTask;

mod state;
use self::state::State;

mod waker;

pub(crate) use self::spawn_location::SpawnLocation;

cfg_taskdump! {
    pub(crate) mod trace;
}

use crate::future::Future;
use crate::util::linked_list;
use crate::util::sharded_list;

use crate::runtime::TaskCallback;
use std::marker::PhantomData;
use std::panic::Location;
use std::ptr::NonNull;
use std::{fmt, mem};

/// An owned handle to the task, tracked by ref count.
#[repr(transparent)]
pub(crate) struct Task<S: 'static> {
    raw: RawTask,
    _p: PhantomData<S>,
}

unsafe impl<S> Send for Task<S> {}
unsafe impl<S> Sync for Task<S> {}

/// A task was notified.
#[repr(transparent)]
pub(crate) struct Notified<S: 'static>(Task<S>);

impl<S> Notified<S> {
    #[cfg(all(tokio_unstable, feature = "rt-multi-thread"))]
    #[inline]
    pub(crate) fn task_meta<'task, 'meta>(&'task self) -> crate::runtime::TaskMeta<'meta> {
        self.0.task_meta()
    }
}

// safety: This type cannot be used to touch the task without first verifying
// that the value is on a thread where it is safe to poll the task.
unsafe impl<S: Schedule> Send for Notified<S> {}
unsafe impl<S: Schedule> Sync for Notified<S> {}

/// A non-Send variant of Notified with the invariant that it is on a thread
/// where it is safe to poll it.
#[repr(transparent)]
pub(crate) struct LocalNotified<S: 'static> {
    task: Task<S>,
    _not_send: PhantomData<*const ()>,
}

impl<S> LocalNotified<S> {
    #[cfg(tokio_unstable)]
    #[inline]
    pub(crate) fn task_meta<'task, 'meta>(&'task self) -> crate::runtime::TaskMeta<'meta> {
        self.task.task_meta()
    }
}

/// A task that is not owned by any `OwnedTasks`. Used for blocking tasks.
/// This type holds two ref-counts.
pub(crate) struct UnownedTask<S: 'static> {
    raw: RawTask,
    _p: PhantomData<S>,
}

// safety: This type can only be created given a Send task.
unsafe impl<S> Send for UnownedTask<S> {}
unsafe impl<S> Sync for UnownedTask<S> {}

/// Task result sent back.
pub(crate) type Result<T> = std::result::Result<T, JoinError>;

/// Hooks for scheduling tasks which are needed in the task harness.
#[derive(Clone)]
pub(crate) struct TaskHarnessScheduleHooks {
    pub(crate) task_terminate_callback: Option<TaskCallback>,
}

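/// The interface the task harness uses to hand tasks back to the scheduler.
///
/// A hypothetical minimal implementation is sketched below, for illustration
/// only (the name `NopScheduler` is made up; real schedulers typically remove
/// the task from their owned-task list in `release` and push it onto a run
/// queue in `schedule`):
///
/// ```ignore
/// struct NopScheduler;
///
/// impl Schedule for NopScheduler {
///     fn release(&self, _task: &Task<Self>) -> Option<Task<Self>> {
///         // Nothing owns the task, so there is nothing to release.
///         None
///     }
///
///     fn schedule(&self, _task: Notified<Self>) {
///         // A real scheduler would push the task onto its run queue here.
///     }
///
///     fn hooks(&self) -> TaskHarnessScheduleHooks {
///         TaskHarnessScheduleHooks {
///             task_terminate_callback: None,
///         }
///     }
/// }
/// ```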
pub(crate) trait Schedule: Sync + Sized + 'static {
    /// The task has completed work and is ready to be released. The scheduler
    /// should release it immediately and return it. The task module will batch
    /// the ref-dec with other state changes.
    ///
    /// If the scheduler has already released the task, then None is returned.
    fn release(&self, task: &Task<Self>) -> Option<Task<Self>>;

    /// Schedule the task to run.
    fn schedule(&self, task: Notified<Self>);

    fn hooks(&self) -> TaskHarnessScheduleHooks;

    /// Schedule the task to run in the near future, yielding the thread to
    /// other tasks.
    fn yield_now(&self, task: Notified<Self>) {
        self.schedule(task);
    }

    /// Polling the task resulted in a panic. Should the runtime shut down?
    fn unhandled_panic(&self) {
        // By default, do nothing. This maintains the 1.0 behavior.
    }
}

cfg_rt! {
    /// This is the constructor for a new task. Three references to the task
    /// are created. The first task reference is usually put into an
    /// `OwnedTasks` immediately. The `Notified` is sent to the scheduler as an
    /// ordinary notification, and the `JoinHandle` is returned to the caller.
    fn new_task<T, S>(
        task: T,
        scheduler: S,
        id: Id,
        spawned_at: SpawnLocation,
    ) -> (Task<S>, Notified<S>, JoinHandle<T::Output>)
    where
        S: Schedule,
        T: Future + 'static,
        T::Output: 'static,
    {
        let raw = RawTask::new::<T, S>(task, scheduler, id, spawned_at);
        let task = Task {
            raw,
            _p: PhantomData,
        };
        let notified = Notified(Task {
            raw,
            _p: PhantomData,
        });
        let join = JoinHandle::new(raw);

        (task, notified, join)
    }

    /// Creates a new task with an associated join handle. This method is used
    /// only when the task is not going to be stored in an `OwnedTasks` list.
    ///
    /// Currently only blocking tasks use this method.
    pub(crate) fn unowned<T, S>(
        task: T,
        scheduler: S,
        id: Id,
        spawned_at: SpawnLocation,
    ) -> (UnownedTask<S>, JoinHandle<T::Output>)
    where
        S: Schedule,
        T: Send + Future + 'static,
        T::Output: Send + 'static,
    {
        let (task, notified, join) = new_task(task, scheduler, id, spawned_at);

        // This transfers the ref-count of task and notified into an UnownedTask.
        // This is valid because an UnownedTask holds two ref-counts.
        let unowned = UnownedTask {
            raw: task.raw,
            _p: PhantomData,
        };
        std::mem::forget(task);
        std::mem::forget(notified);

        (unowned, join)
    }
}

impl<S: 'static> Task<S> {
    unsafe fn new(raw: RawTask) -> Task<S> {
        Task {
            raw,
            _p: PhantomData,
        }
    }

    unsafe fn from_raw(ptr: NonNull<Header>) -> Task<S> {
        Task::new(RawTask::from_raw(ptr))
    }

    #[cfg(all(
        tokio_unstable,
        tokio_taskdump,
        feature = "rt",
        target_os = "linux",
        any(target_arch = "aarch64", target_arch = "x86", target_arch = "x86_64")
    ))]
    pub(super) fn as_raw(&self) -> RawTask {
        self.raw
    }

    fn header(&self) -> &Header {
        self.raw.header()
    }

    fn header_ptr(&self) -> NonNull<Header> {
        self.raw.header_ptr()
    }

    /// Returns a [task ID] that uniquely identifies this task relative to other
    /// currently spawned tasks.
    ///
    /// [task ID]: crate::task::Id
    #[cfg(tokio_unstable)]
    pub(crate) fn id(&self) -> crate::task::Id {
        // Safety: The header pointer is valid.
        unsafe { Header::get_id(self.raw.header_ptr()) }
    }

    #[cfg(tokio_unstable)]
    pub(crate) fn spawned_at(&self) -> &'static Location<'static> {
        // Safety: The header pointer is valid.
        unsafe { Header::get_spawn_location(self.raw.header_ptr()) }
    }

    // Explicit `'task` and `'meta` lifetimes are necessary here, as otherwise,
    // the compiler infers the lifetimes to be the same, and considers the task
    // to be borrowed for the lifetime of the returned `TaskMeta`.
    #[cfg(tokio_unstable)]
    pub(crate) fn task_meta<'task, 'meta>(&'task self) -> crate::runtime::TaskMeta<'meta> {
        crate::runtime::TaskMeta {
            id: self.id(),
            spawned_at: self.spawned_at().into(),
            _phantom: PhantomData,
        }
    }

    cfg_taskdump! {
        /// Notify the task for task dumping.
        ///
        /// Returns `None` if the task has already been notified.
        pub(super) fn notify_for_tracing(&self) -> Option<Notified<S>> {
            if self.as_raw().state().transition_to_notified_for_tracing() {
                // SAFETY: `transition_to_notified_for_tracing` increments the
                // refcount.
                Some(unsafe { Notified(Task::new(self.raw)) })
            } else {
                None
            }
        }
    }
}

impl<S: 'static> Notified<S> {
    fn header(&self) -> &Header {
        self.0.header()
    }

    #[cfg(tokio_unstable)]
    #[allow(dead_code)]
    pub(crate) fn task_id(&self) -> crate::task::Id {
        self.0.id()
    }
}

impl<S: 'static> Notified<S> {
    pub(crate) unsafe fn from_raw(ptr: RawTask) -> Notified<S> {
        Notified(Task::new(ptr))
    }
}

impl<S: 'static> Notified<S> {
    pub(crate) fn into_raw(self) -> RawTask {
        let raw = self.0.raw;
        mem::forget(self);
        raw
    }
}

impl<S: Schedule> Task<S> {
    /// Preemptively cancels the task as part of the shutdown process.
    pub(crate) fn shutdown(self) {
        let raw = self.raw;
        mem::forget(self);
        raw.shutdown();
    }
}

impl<S: Schedule> LocalNotified<S> {
    /// Runs the task.
    pub(crate) fn run(self) {
        let raw = self.task.raw;
        mem::forget(self);
        raw.poll();
    }
}

impl<S: Schedule> UnownedTask<S> {
    // Used in tests of the inject queue.
    #[cfg(test)]
    #[cfg_attr(target_family = "wasm", allow(dead_code))]
    pub(super) fn into_notified(self) -> Notified<S> {
        Notified(self.into_task())
    }

    fn into_task(self) -> Task<S> {
        // Convert into a task.
        let task = Task {
            raw: self.raw,
            _p: PhantomData,
        };
        mem::forget(self);

        // Drop a ref-count since an UnownedTask holds two.
        task.header().state.ref_dec();

        task
    }

    pub(crate) fn run(self) {
        let raw = self.raw;
        mem::forget(self);

        // Transfer one ref-count to a Task object.
        let task = Task::<S> {
            raw,
            _p: PhantomData,
        };

        // Use the other ref-count to poll the task.
        raw.poll();
        // Decrement our extra ref-count.
        drop(task);
    }

    pub(crate) fn shutdown(self) {
        self.into_task().shutdown();
    }
}

impl<S: 'static> Drop for Task<S> {
    fn drop(&mut self) {
        // Decrement the ref count.
        if self.header().state.ref_dec() {
            // Deallocate if this is the final ref count.
            self.raw.dealloc();
        }
    }
}

impl<S: 'static> Drop for UnownedTask<S> {
    fn drop(&mut self) {
        // Decrement the ref count.
        if self.raw.header().state.ref_dec_twice() {
            // Deallocate if this is the final ref count.
            self.raw.dealloc();
        }
    }
}

impl<S> fmt::Debug for Task<S> {
    fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(fmt, "Task({:p})", self.header())
    }
}

impl<S> fmt::Debug for Notified<S> {
    fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(fmt, "task::Notified({:p})", self.0.header())
    }
}

/// # Safety
///
/// Tasks are pinned.
unsafe impl<S> linked_list::Link for Task<S> {
    type Handle = Task<S>;
    type Target = Header;

    fn as_raw(handle: &Task<S>) -> NonNull<Header> {
        handle.raw.header_ptr()
    }

    unsafe fn from_raw(ptr: NonNull<Header>) -> Task<S> {
        Task::from_raw(ptr)
    }

    unsafe fn pointers(target: NonNull<Header>) -> NonNull<linked_list::Pointers<Header>> {
        self::core::Trailer::addr_of_owned(Header::get_trailer(target))
    }
}

/// # Safety
///
/// The id of a task is never changed after creation of the task, so the return value of
/// `get_shard_id` will not change. (The cast may throw away the upper 32 bits of the task id, but
/// the shard id still won't change from call to call.)
unsafe impl<S> sharded_list::ShardedListItem for Task<S> {
    unsafe fn get_shard_id(target: NonNull<Self::Target>) -> usize {
        // SAFETY: The caller guarantees that `target` points at a valid task.
        let task_id = unsafe { Header::get_id(target) };
        task_id.0.get() as usize
    }
}

/// Wrapper around [`std::panic::Location`] whose payload is compiled out
/// (making it zero-sized) when `tokio_unstable` is not enabled.
#[cfg(tokio_unstable)]
mod spawn_location {
    use std::panic::Location;

    #[derive(Copy, Clone)]
    pub(crate) struct SpawnLocation(pub &'static Location<'static>);

    impl From<&'static Location<'static>> for SpawnLocation {
        fn from(location: &'static Location<'static>) -> Self {
            Self(location)
        }
    }
}

#[cfg(not(tokio_unstable))]
mod spawn_location {
    use std::panic::Location;

    #[derive(Copy, Clone)]
    pub(crate) struct SpawnLocation();

    impl From<&'static Location<'static>> for SpawnLocation {
        fn from(_: &'static Location<'static>) -> Self {
            Self()
        }
    }

    #[cfg(test)]
    #[test]
    fn spawn_location_is_zero_sized() {
        assert_eq!(std::mem::size_of::<SpawnLocation>(), 0);
    }
}

impl SpawnLocation {
    #[track_caller]
    #[inline]
    pub(crate) fn capture() -> Self {
        Self::from(Location::caller())
    }
}