Replaces the annoying dual-return (i.e., created `Call` *and* `Result<x>`) with a single `Return<Call/ConnectionInfo>`. Users are now informed via the return type that a `Call` has been created -- thus, cleanup in the event of connection failure is now their responsibility.
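For illustration, caller-side handling under the new single-return shape might look roughly like this (the method names and types here are assumptions based on the description above, not the crate's confirmed API):

```rust
// Sketch only: `manager` is a `Songbird` instance; the method and return
// shapes below are assumptions based on the description above.
async fn join_or_clean_up(
    manager: &songbird::Songbird,
    guild_id: songbird::id::GuildId,
    channel_id: songbird::id::ChannelId,
) {
    match manager.join(guild_id, channel_id).await {
        Ok(_call) => {
            // Connected: the returned `Call` is usable as normal.
        }
        Err(why) => {
            // A `Call` was still created before the connection failed, so the
            // caller is now responsible for cleaning it up, e.g. by removing it.
            println!("Join failed: {:?}", why);
            let _ = manager.remove(guild_id).await;
        }
    }
}
```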
Tested using `cargo make ready`.
Closes #65.
Adds a new field to `Config`, `disposer`, an `Option<Sender<DisposalMessage>>` responsible for dropping `DisposalMessage`s on a separate thread.
If this is not set, and the `Config` is passed into `manager::Songbird`, a thread is spawned for this purpose (which was previously spawned per driver).
If this is not set, and the `Config` is passed directly into `Driver` or `Call`, a thread is spawned locally, which is the current behavior, as there is nowhere to store the `Sender`.
This disposer is then used in `Driver` as previously, to run possibly blocking destructors (which should only block the disposal thread). I cannot see this disposal thread getting overloaded, but if it is, `DisposalMessage`s will simply be queued in the flume channel until they can be dropped.
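As a rough illustration of the pattern, a disposal thread draining a flume channel might look like this; the message contents and the wiring into `Config` are simplified stand-ins, not songbird's exact internals:

```rust
// Sketch only: the disposal-thread pattern described above, using a flume channel.
use std::thread;

// Hypothetical stand-in for the real `DisposalMessage`.
enum DisposalMessage {
    Buffer(Vec<u8>), // e.g. data whose destructor may be slow
}

fn spawn_disposer() -> flume::Sender<DisposalMessage> {
    let (tx, rx) = flume::unbounded::<DisposalMessage>();
    thread::spawn(move || {
        // Draining the channel here keeps potentially blocking destructors off
        // the real-time mixer thread; each message is dropped as it is received.
        for _msg in rx.iter() {}
    });
    tx
}
```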
Co-authored-by: Kyle Simpson <kyleandrew.simpson@gmail.com>
This extensive PR rewrites the internal mixing logic of the driver to use symphonia for parsing and decoding audio data, and rubato to resample audio. Existing logic to decode DCA and Opus formats/data has been reworked as plugins for symphonia. The main benefit is that we no longer need to keep yt-dlp and ffmpeg processes alive, saving a lot of memory and CPU: all decoding can be done in Rust! In exchange, we now need to do a lot of the HTTP handling and resumption ourselves, but this is still a huge net positive.
`Input`s have been completely reworked such that all default (non-cached) sources are lazy by default, and are no longer covered by a special-case `Restartable`. These now span a gamut from a `Compose` (lazy), to a live source, to a fully `Parsed` source. As mixing is still sync, this includes adapters for `AsyncRead`/`AsyncSeek`, and HTTP streams.
`Track`s have been reworked so that they only contain initialisation state for each track. `TrackHandles` are only created once a `Track`/`Input` has been handed over to the driver, replacing `create_player` and related functions. `TrackHandle::action` now acts on a `View` of (im)mutable state, and can request seeks/readying via `Action`.
Per-track event handling has also been improved -- we can now determine and propagate the reason behind individual track errors due to the new backend. Some `TrackHandle` commands (seek etc.) benefit from this, and now use internal callbacks to signal completion.
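As a rough sketch of the new caller-side flow (the constructor and method names here, `HttpRequest::new`, `play_input`, `set_volume`, are assumptions based on this description, not a confirmed API surface):

```rust
// Sketch only: a lazy HTTP source handed to the driver, which is the point at
// which a `TrackHandle` comes into existence (replacing `create_player`).
use songbird::driver::Driver;
use songbird::input::{HttpRequest, Input};

async fn queue_remote_audio(driver: &mut Driver, client: reqwest::Client) {
    // Lazy by default: nothing is fetched or parsed until the driver needs it.
    let lazy: Input = HttpRequest::new(client, "https://example.com/audio.mp3".into()).into();

    // Handing the `Input` to the driver is what creates the `TrackHandle`.
    let handle = driver.play_input(lazy);

    // The handle is then used for later control (volume, seeks, events, ...).
    let _ = handle.set_volume(0.5);
}
```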
Due to associated PRs on felixmcfelix/songbird from avid testers, this includes general clippy tweaks, API additions, and other repo-wide cleanup. Thanks go out to the below co-authors.
Co-authored-by: Gnome! <45660393+GnomedDev@users.noreply.github.com>
Co-authored-by: Alakh <36898190+alakhpc@users.noreply.github.com>
This handles twilight's migration to a unified `Id` type, which is the only design change needing any handling on our part. All our `From`/`Into`s are covered now, and deprecated type aliases are no longer used.
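For instance, a conversion along these lines should now be covered; the exact impl shown is an assumption:

```rust
// Sketch only: converting twilight's unified `Id` into songbird's own id type,
// relying on the `From`/`Into` coverage mentioned above.
use twilight_model::id::{marker::GuildMarker, Id};

fn to_songbird_guild_id(id: Id<GuildMarker>) -> songbird::id::GuildId {
    id.into()
}
```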
This was tested using `cargo make ready` and by manually running "examples/twilight".
This PR adds support for twilight v0.8, mainly adapting to significant API changes introduced by v0.7. As a result of these, twilight no longer accepts arbitrary JSON input, so it seemed sensible to adapt our `Shard` design to no longer require the same.
Adding to this, I've introduced a trait to allow an arbitrary `Shard` to be installed, given only an implementation of a method to send a `VoiceStateUpdate`. Together, `Sharder::Generic` (`songbird::shards::VoiceUpdate`) and `Shard::Generic` (`songbird::shards::GenericSharder`) should allow any library to be hooked into Songbird.
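A rough sketch of what plugging in another gateway library might look like; the trait below mimics the shape described (a single voice-state-update method), and its exact name, argument types, and error type are assumptions rather than the published signature:

```rust
// Sketch only: assumed shape of the voice-update hook described above.
use async_trait::async_trait;

struct MyGatewayShard {/* handle to your library's gateway connection */}

#[async_trait]
trait VoiceUpdateSketch {
    // Send a `VoiceStateUpdate` payload for this guild/channel over the gateway.
    async fn update_voice_state(
        &self,
        guild_id: u64,
        channel_id: Option<u64>,
        self_deaf: bool,
        self_mute: bool,
    ) -> Result<(), Box<dyn std::error::Error + Send + Sync>>;
}

#[async_trait]
impl VoiceUpdateSketch for MyGatewayShard {
    async fn update_voice_state(
        &self,
        _guild_id: u64,
        _channel_id: Option<u64>,
        _self_deaf: bool,
        _self_mute: bool,
    ) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
        // Serialise and send gateway opcode 4 (Voice State Update) here.
        Ok(())
    }
}
```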
This PR was tested using `cargo make ready` and by manually testing `examples/twilight`.
Includes two more changes too small to warrant their own PRs.
1. Removes the `shard_count` parameter from `Songbird::twilight` & `Songbird::twilight_from_config` since the cluster contains it.
2. Drops the `Arc` wrapper around `Songbird` to match an upcoming twilight 0.7 change.
Adds some additional logging around rarely hit critical sections (i.e., during shard reconnections), in pursuit of issue #69. It's strongly suspected to lie here, at any rate...
This change fixes tasks hanging due to rare cases of messages being lost between full Discord reconnections by placing a configurable timeout on the `ConnectionInfo` responses. This is a companion fix to [serenity#1255](https://github.com/serenity-rs/serenity/pull/1255). To make this doable, `Config`s are now used by all versions of `Songbird`/`Call`, and relevant functions are added to simplify setup with configuration. These are now non-exhaustive, correcting an earlier oversight. For future extensibility, this PR moves the return type of `join`/`join_gateway` into a custom future (no longer leaking flume's `RecvFut` type).
Additionally, this fixes the Makefile's feature sets for driver/gateway-only compilation.
This is a breaking change in:
* the return types of `join`/`join_gateway`,
* moving `crate::driver::Config` -> `crate::Config`,
* `Config` and `JoinError` becoming `#[non_exhaustive]`.
This was tested via `cargo make ready`, and by testing `examples/serenity/voice_receive` with various timeout settings.
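For illustration, configuring such a timeout might look roughly like this (the `gateway_timeout` builder name is an assumption here):

```rust
// Sketch only: give joins a bounded wait so a lost gateway reply cannot hang
// the task forever.
use std::time::Duration;

fn make_config() -> songbird::Config {
    songbird::Config::default().gateway_timeout(Some(Duration::from_secs(10)))
}
```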
Joining a channel returns a future which fires on receipt of two messages from discord (by locally storing a channel). However, joining this same channel again after a success returns only *one* such message, causing the command to hang until another join fires or the channel is left. This alters internal behaviour to correctly cancel an in-progress connection attempt, or return success with known data if such a connection is present.
This introduces a breaking change on `Call::update_state` to include the target `ChannelId`. The reason for this is that although the `ChannelId` of a target channel was being stored, server admins may move or kick a bot from its voice channel. This changes the true channel, and may accidentally trigger a "double join" elsewhere.
This fix was tested by using an example to have a bot join its channel twice, to do so in a channel it had been moved to, and to move from a channel it had been moved to.
This change prevents mixer threads from waking every 20ms without an active voice connection. This was leading to unacceptably high CPU usage in cases where users needed to preserve this state between many active connections. Additionally, this modifies the documentation of `Songbird::leave` to emphasise why users would prefer to `remove` their calls.
This was tested by examining the CPU usage in task manager before and after the change was made, using a control of 10k manually created `Driver` instances. After creation is finished, the Drivers no longer saturate a 6-core laptop Intel i7 (while they very much did so before).
Closes #42.
Adds support to the library for tokio 0.2 backward-compatibility. This should hopefully benefit lavalink-rs, and prevent it from being blocked on this feature.
These can be reached using, e.g., `gateway-tokio-02`, `driver-tokio-02`, `serenity-rustls-tokio-02`, and `serenity-native-tokio-02` features.
Naturally, this requires some jiggering about with features and the underlying CI, which has been taken care of. Twilight can't be handled in this way, as their last tokio 0.2 version uses the deprecated Discord Gateway v6.
Moves to the faster `dashmap` in the Songbird management struct, as the final v4 brought back the `entry` API that I needed in order to use it safely.
Also handles some new clippy lints.
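For context, the `entry` pattern in question looks roughly like this; the map's key and value types below are placeholders rather than Songbird's actual internals:

```rust
// Sketch only: `entry` lets "get or insert" happen under a single shard lock,
// avoiding the check-then-insert race that a plain get/insert sequence has.
use dashmap::DashMap;
use std::sync::Arc;

fn get_or_create(calls: &DashMap<u64, Arc<String>>, guild_id: u64) -> Arc<String> {
    calls
        .entry(guild_id)
        .or_insert_with(|| Arc::new(String::from("new call state")))
        .value()
        .clone()
}
```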
A potential deadlock (identified by a user) is now warned about. The way the example is structured prevents this from occurring, but it's worth making this more explicit due to the more free-form nature of twilight.
The design of serenity's event handling and framework should prevent this issue from cropping up when using it as a gateway backend.
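The general shape of the hazard and its avoidance, sketched with a placeholder lock rather than the example's literal code:

```rust
// Sketch only: holding a call's lock across a long `.await` while an event
// handler also needs that lock is the classic recipe for this deadlock.
use std::sync::Arc;
use tokio::sync::Mutex;

async fn safe_pattern(call: Arc<Mutex<String>>) {
    // Take the lock, do the synchronous work, then drop the guard *before*
    // awaiting anything that might need the same lock elsewhere.
    let snapshot = {
        let guard = call.lock().await;
        guard.clone()
    }; // guard dropped here

    long_running_task(snapshot).await;
}

async fn long_running_task(_state: String) {
    // ...might itself try to lock the call in a real handler...
}
```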
* Driver Benchmarks
Benchmarks driver use cases for single packet send, multiple packet send, float vs opus, and the cost of head-of-queue track removal. Mix costs for large packet counts are also included.
This is a prelude to the optimisations discussed in #21.
* Typo in benchmark
* Place Opus packet directly into packet buffer
Cleans up some other logic surrounding this, too. Gets a 16.9% perf improvement on opus packet passthrough (sub 5us here).
* Better track removal
In theory this should be faster, but it ain't. Keeping it in case reducing struct sizes down the line magically makes this faster.
* Reduce size of `Input`, `TrackHandle`
`Metadata` is now boxed away. Similarly, `TrackHandle`s are neatly `Arc`'d to reduce their size to pointer length (and mitigate the impact of copies if we add in more fields).
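A minimal illustration of the size effect being exploited here; the struct definitions are placeholders, not the real `Input`/`TrackHandle`/`Metadata` types:

```rust
// Sketch only: boxing a large field shrinks the outer struct to pointer size,
// and wrapping shared handles in `Arc` makes copies cheap.
use std::sync::Arc;

struct BigMetadata {
    title: String,
    artist: String,
    raw: [u8; 512],
}

struct SlimHandle {
    // Boxed: `SlimHandle` stores one pointer instead of the whole blob.
    metadata: Box<BigMetadata>,
}

fn main() {
    println!("BigMetadata: {} bytes", std::mem::size_of::<BigMetadata>());
    println!("SlimHandle:  {} bytes", std::mem::size_of::<SlimHandle>());
    println!("Arc<SlimHandle>: {} bytes", std::mem::size_of::<Arc<SlimHandle>>());
}
```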
Main goal: folding down a lot of nested future/result handling.
This mainly modifies error handling for Tracks and TrackHandles to be more consistent, and hides the underlying channel result passing in `get_info`. Errors returned should be far clearer, and are domain-specific rather than falling back to a very opaque use of the underlying channel error. It should be clearer to users why their handle commands failed, or why they can't make a ytdl track loop or similar.
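The underlying pattern, sketched with hypothetical error names rather than the crate's real types:

```rust
// Sketch only: mapping an opaque channel failure into a domain-specific error
// so callers see *why* a handle command failed, not just "channel closed".
use flume::RecvError;

#[derive(Debug)]
enum ControlErrorSketch {
    // The track ended, or the driver dropped the handle's receiver.
    Finished,
    // The command doesn't make sense for this source (e.g. looping a live stream).
    InvalidTrackEvent,
}

fn map_channel_error(err: RecvError) -> ControlErrorSketch {
    match err {
        // A disconnected channel means the track is no longer alive.
        RecvError::Disconnected => ControlErrorSketch::Finished,
    }
}
```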
Also fixed/cleaned up `Songbird::join(_gateway)` to return in a single await, sparing the user from the underlying channel details and repeated `Err`s. I was trying for some time to extend the same graces to `Call`, but could not figure out a sane way to get a `'static` version of the first future in the chain (i.e., the gateway send) so that the whole thing could happen after dropping the lock around the `Call`. I really wanted to fix this to happen as a single folded await too, but I think this might need some crazy hack or redesign.
Far cleaner and more reliable than the old doc-link pattern. Also allowed me to spot some event types and sources which should have been made non_exhaustive.
This implements a proof-of-concept for an improved audio frontend. The largest change is the introduction of events and event handling: both by time elapsed and by track events, such as ending or looping. Following on from this, the library now includes a basic, event-driven track queue system (which people seem to ask for unusually often). A new sample, `examples/13_voice_events`, demonstrates both the `TrackQueue` system and some basic events via the `~queue` and `~play_fade` commands.
Locks are removed from around the control of `Audio` objects, which should allow the backend to be moved to a more granular futures-based backend solution in a cleaner way.
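A sketch of the kind of event-driven usage this enables; the handler trait and registration shown follow the shape that songbird later settled on, and should be read as assumptions relative to this proof-of-concept:

```rust
// Sketch only: a handler fired on track events, e.g. to advance a queue or
// send a "now playing" notice.
use async_trait::async_trait;
use songbird::{Event, EventContext, EventHandler};

struct SongEndNotifier;

#[async_trait]
impl EventHandler for SongEndNotifier {
    async fn act(&self, _ctx: &EventContext<'_>) -> Option<Event> {
        println!("Track finished; a queue could start its next item here.");
        // Returning `None` keeps the handler registered for future events.
        None
    }
}

// Registration, roughly:
//   handle.add_event(Event::Track(songbird::TrackEvent::End), SongEndNotifier)?;
```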