Actix: a Rust framework for web and WASM-based applications

Rust's premier framework for building large-scale web applications across a wide variety of operating systems

actix/README.md

Issues

Collection of the latest Issues

prk3

C-bug
0

Here is code that reproduces the issue: main.rs

Cargo.toml

Expected Behavior

The program does not panic. Nothing should be sent to stdout, as the system is dropped before the 100-millisecond interval elapses.

Current Behavior

The program panics and produces the following backtrace on crash:

Possible Solution

I suspect run_interval creates a task that isn't dropped in time, or that it always assumes a Tokio runtime exists. I noticed that a short sleep just after starting the actor fixes the issue (commented in the example). An explicit ctx.cancel_future in the stopping or stopped method does not fix the problem.

Steps to Reproduce (for bugs)

  1. Call run_interval in actor's started method.
  2. Start actor in actix-rt system.
  3. Quickly drop the system.
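Since the original main.rs isn't included here, a minimal sketch of these steps (hypothetical actor name, assuming actix 0.13 and actix-rt 2) might look like:

    use std::time::Duration;

    use actix::prelude::*;

    struct Ticker;

    impl Actor for Ticker {
        type Context = Context<Self>;

        fn started(&mut self, ctx: &mut Self::Context) {
            // Step 1: schedule a periodic task from `started`.
            ctx.run_interval(Duration::from_millis(100), |_act, _ctx| {
                println!("tick"); // should never run: the system is dropped first
            });
        }
    }

    fn main() {
        // Step 2: start the actor inside an actix-rt system.
        let system = System::new();
        system.block_on(async {
            let _addr = Ticker.start();
        });
        // Step 3: drop the system before the first interval fires.
        drop(system);
    }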

Context

We came across this error twice: once when running a test function with the should_panic attribute (the expected panic triggers drops, and dropping the system causes this panic), and once when starting and dropping a system in a loop in a fuzz test.

Your Environment

Linux work 5.16.13-200.fc35.x86_64 #1 SMP PREEMPT Tue Mar 8 22:50:58 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

  • Rust Version (i.e., output of rustc -V): rustc 1.58.1 (db9d1b20b 2022-01-20)
  • Actix Version: 0.13.0, 0.12.0
bobs4462

C-bug
0

In the implementation of the Future trait for actix::contextimpl::ContextFut there is a section concerned with mailbox processing: https://github.com/actix/actix/blob/8dfab7eca079bec0a83c6d425a37c19da5106565/actix/src/contextimpl.rs#L380-L386

The implementation itself is fine, except when the mailbox is constantly flooded with new messages: https://github.com/actix/actix/blob/8dfab7eca079bec0a83c6d425a37c19da5106565/actix/src/mailbox.rs#L77-L94 The poll method of Mailbox<A> gets stuck forever in the while loop, since AddressReceiver::poll_next keeps producing values (the message consumption rate is equal to or lower than the production rate), and as a result ContextFut cannot make progress on its other sub-tasks (including Actor termination, if one is requested).

Expected Behavior

All sub-tasks should get a chance to make progress in each iteration of the loop, even if the mailbox always has new messages to process.

Current Behavior

<ContextFut as Future>::poll gets stuck in mailbox processing if the message production rate is higher than the consumption rate.

Possible Solution

Introducing an optional parameter to limit the number of consecutively processed mailbox messages could help, for example:
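The proposed snippet isn't included here, but the idea can be sketched outside actix internals with plain futures primitives (the BUDGET constant and the types below are hypothetical): drain at most a fixed number of items per poll, then yield and re-wake so that sibling sub-tasks are never starved.

    use std::pin::Pin;
    use std::task::{Context, Poll};

    use futures::channel::mpsc;
    use futures::{executor, Future, StreamExt};

    const BUDGET: usize = 16; // hypothetical per-poll message limit

    struct BoundedDrain {
        rx: mpsc::UnboundedReceiver<u32>,
    }

    impl Future for BoundedDrain {
        type Output = ();

        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
            let this = self.get_mut();
            let mut processed = 0;
            while processed < BUDGET {
                match this.rx.poll_next_unpin(cx) {
                    Poll::Ready(Some(msg)) => {
                        // Handle one message.
                        println!("handled {}", msg);
                        processed += 1;
                    }
                    Poll::Ready(None) => return Poll::Ready(()),
                    Poll::Pending => return Poll::Pending,
                }
            }
            // Budget exhausted: yield so other sub-tasks can run, but make sure
            // this task is polled again to keep draining the queue.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }

    fn main() {
        let (tx, rx) = mpsc::unbounded();
        for i in 0..100u32 {
            tx.unbounded_send(i).unwrap();
        }
        drop(tx);
        executor::block_on(BoundedDrain { rx });
    }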

Steps to Reproduce

  1. Create a consumer Actor
  2. Create multiple producer Actors
  3. Send messages from producers to consumer (might as well use try_send)
  4. Make sure that production is higher than consumption
  5. Try to terminate the consumer from within message handling after some time; if the mailbox is always at full capacity, the consumer Actor will never be able to terminate.

Context

I was using actix-web-actors for WebSocket connection management; the task was to forward data from a pub/sub service to clients over WebSocket connections. The WebSocket connection manager is implemented as a separate Actor, which receives messages from other Actors in the system and forwards them to clients. If messages are produced faster than the WebSocket connection manager can consume them, its mailbox fills up and stays full, since message production never stops (there is no back-pressure). Meanwhile, the messages handled by the WebSocket connection actor are added to the queue in WebsocketContext::messages in order to be sent to the client: https://github.com/actix/actix-web/blob/fc5ecdc30bb1b964b512686bff3eaf83b7271cf5/actix-web-actors/src/ws.rs#L370-L377 but because mailbox processing never stops, that VecDeque grows without bound, as there is no opportunity to empty it and send the queued messages to the client. Eventually, as one might guess, the application crashes after running out of memory.

  • Rust Version (i.e., output of rustc -V): 1.56.1
  • Actix Version: 0.12.0
  • Actix-web-actors: 4.0.0-beta.10
thalesfragoso

C-bug
2

Addr::send does check whether the Mailbox is full, but the future it creates uses a clone of AddressSender, and due to how AddressSender::clone is implemented, the newly cloned AddressSender always bypasses the Mailbox size check. This causes MsgRequest to push to the queue on the first poll, regardless of whether the queue is full.

AddressSender::clone: https://github.com/actix/actix/blob/8dfab7eca079bec0a83c6d425a37c19da5106565/actix/src/address/channel.rs#L517-L521

Note that it uses an Arc for maybe_parked, but instead of cloning the existing Arc it creates a new one. That Arc seems to never be used.

However, here comes the catch: the channel docs seem to imply that this is the intended behavior, i.e. cloning an AddressSender lets you bypass the queue by one message. But it's counter-intuitive for Addr::send to behave like that; its docs specifically mention a bounded channel.

I think we should change this behavior somehow or update the docs of Addr::send to reflect the current behavior.

Expected Behavior

The future returned by Addr::send (MsgRequest) should hold on to the item if the queue is full.

Current Behavior

MsgRequest always pushes the message to the Mailbox on the first poll, which might flood the actor and block ContextFut indefinitely, given that it tries to process all mailbox messages on a single poll.

Possible Solution

MsgRequest shouldn't have the power to always jump the queue; we can change the behavior of AddressSender::clone or implement MsgRequest on top of something else. We also need to fix MsgRequest's Future implementation, since it returns Poll::Pending without registering a waker: https://github.com/actix/actix/blob/8dfab7eca079bec0a83c6d425a37c19da5106565/actix/src/address/message.rs#L73-L82

Today this isn't a problem in practice, because the request bypasses the queue and therefore never hits SendError::Full.

Steps to Reproduce (for bugs)

The following example will panic due to: https://github.com/actix/actix/blob/8dfab7eca079bec0a83c6d425a37c19da5106565/actix/src/mailbox.rs#L89

Context

The desired outcome is a future that waits for the Mailbox to have enough space for the message before enqueuing it, which in turn would let me send several messages without fear of blocking ContextFut::poll for too long.

Your Environment

  • Rust Version (i.e., output of rustc -V): 1.55.0
  • Actix Version: 0.12.0
Sytten

help wanted
2

Expected Behavior

Being able to do MySyncActor::from_registry() and have it work like async actors.

Current Behavior

Currently the SystemRegistry can only hold actors with a Context. Thus you need a separate mechanism to keep track of the sync actors and their addresses. This is annoying if you rely mostly on the registry but have a few sync actors, for example for database calls.

Possible Solution

  • One possible solution is to have an actor keep track of them (suggested on Stack Overflow). It is quite elegant, but the gist is old and no longer works. The main disadvantage is that you now need to await the send call to first get the address of the sync actor, and then await again to send a message to it.
  • My preferred solution would be to add a set method on the SystemRegistry like:
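The snippet itself isn't included here; roughly, the desired usage might look like the following sketch (the SystemRegistry::set call for a sync actor is the proposal, not an existing API, and DbExecutor is a hypothetical diesel-style actor):

    use actix::prelude::*;

    struct DbExecutor; // hypothetical sync actor wrapping blocking database calls

    impl Actor for DbExecutor {
        type Context = SyncContext<Self>;
    }

    #[actix::main]
    async fn main() {
        // Today: the Addr has to be threaded through the application by hand.
        let db: Addr<DbExecutor> = SyncArbiter::start(2, || DbExecutor);

        // Proposed (does not exist today): register the address once, then
        // fetch it anywhere in the system, like async SystemService actors:
        //
        //     SystemRegistry::set(db.clone());
        //     let db = DbExecutor::from_registry();

        drop(db);
    }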

Context

A lot of libraries are sync (diesel is the one we use) and likely won't change anytime soon. Currently the SyncContext and SyncArbiter kind of live in their own world and are not well integrated in the whole ecosystem. I propose we change that!

piegamesde

1

When adding messages to an actor, I noticed that the code very often follows a specific pattern that involves a bit of boilerplate (a full example of the pattern follows the list):

  • Create the method that does something on the impl MyActor
  • Create a struct that represents the arguments of that method and derive actix::Message for it
  • Implement actix::Handler<SomeMessage> for MyActor by having the handle method delegate to the associated method
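For reference, a sketch of that three-step pattern with a hypothetical actor (this is just standard actix boilerplate, not the proposed macro):

    use actix::prelude::*;

    struct Counter {
        value: usize,
    }

    impl Actor for Counter {
        type Context = Context<Self>;
    }

    impl Counter {
        // 1. The method that does the actual work.
        fn increment(&mut self, by: usize) {
            self.value += by;
        }
    }

    // 2. A struct that only exists to carry the method's arguments.
    #[derive(Message)]
    #[rtype(result = "()")]
    struct Increment {
        by: usize,
    }

    // 3. A Handler impl whose handle method merely delegates to the method.
    impl Handler<Increment> for Counter {
        type Result = ();

        fn handle(&mut self, msg: Increment, _ctx: &mut Context<Self>) {
            self.increment(msg.by);
        }
    }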

I have the idea of reducing the boilerplate (and the number of structs) involved by having a proc-macro generate most of it. A first step towards this, which only automates the impl Handler, could look like this:

A second step then would be to somehow extract the function arguments into their own struct. I am sorry that I cannot prototype this myself, as I don't have enough proc macro skills.


Motivation: I am working on a GTK application using woab. GTK uses a lot of signals and handlers, resulting in a lot of boilerplate code. Therefore, woab currently exposes all signals as variants of one message enum. But I think that having a proper message per signal type would be more idiomatic.

matklad

C-bug
8

Expected Behavior

After System::run returns, all of the system's actors are stopped and dropped, and all Arbiters are joined.

Current Behavior

If, inside the system, additional arbiters are created, they outlive the system.

Reproducible example:

This prints B, A while it should print A, B. Otherwise, it is not really defined when the actor gets dropped; it might even happen after main exits, skipping drops.

Context

https://github.com/near/nearcore/issues/3925

Your Environment

  • Rust Version (i.e., output of rustc -V): rustc 1.49.0-nightly (91a79fb29 2020-10-07)
  • Actix Version: 0.11.0-beta.1
hoodie

0

Expected Behavior

Recipient::do_send() should not return a Result, same as Addr::do_send() does.

Current Behavior

Currently Recipient::do_send() is identical in behavior to Recipient::try_send(); this seems to be an oversight.

Possible Solution

Consume the Result inside the method body, same as Addr::do_send() does: https://github.com/actix/actix/blob/master/src/address/mod.rs#L99

Context

This is a breaking API change, but also a consistency improvement; users of Recipient::do_send() presumably already throw away the Result.

CrazyRoka

1

I am writing an Actix Web application and using Actix for my service layer. During testing I am enabling logging with this piece of code:

Logs are captured during testing and only printed to stdout for failed tests. Great! But recently I started using SyncContext in Actix, and now logs appear during all tests; I can't disable them. It seems like SyncArbiter is spawning threads whose log output is not captured by the test harness. Do you know how I can disable logs during testing and only show them for failed tests?

I am logging with the pretty_env_logger and log crates. Maybe there is a better solution for doing so? rustc 1.45.1, actix 0.9.0, log 0.4.11, pretty_env_logger 0.4.0.

Firstyear

0

Expected Behavior

SyncArbiters can shut down while there are still active messages in their queues. Normally this behaviour is fine, but for a SyncArbiter acting as, for example, a logging thread, this can cause information to be lost.

Current Behavior

It should be possible to specify the stop condition in the context, e.g. "just stop" or "stop once drained". Normally the stop order comes from msg.connected(), which means that once there are no more connections it should be possible to drain the queue and exit the arbiter cleanly.

  • Actix Version: 0.9, but examining the code, this would occur in 1.0.0

I think this could be achieved simply by adding an is_empty() method to msgs, returning state.num_messages == 0; then, in sync.rs, the arbiter would keep running while connected || !is_empty().

I can supply a patch if needed.

sg552

question
2

Hi, our project is using Actix, which is really fast. However, when writing tests I found there is a lot of duplication in our test code, especially the "init server" code, e.g.

I know that many test frameworks, such as JUnit and RSpec, support before/after hooks, which keep the code DRY and clean. For the code above, the JUnit way would be to extract the "init service" code into a before hook.

I have googled around and found this post: https://medium.com/@ericdreichert/test-setup-and-teardown-in-rust-without-a-framework-ba32d97aa5ab I don't know if this is the official way.

So, how does one do before/after hooks in Actix/Rust?

Thanks a lot!

Diggsey

3

I wrote a small crate for using async/await syntax with actix. It also makes the "AtomicResponse" type obsolete by providing a more ergonomic mechanism to achieve the same thing. I think it would be nice to have as part of actix itself.

The basic idea is to use scoped TLS (at least until custom future contexts are supported) to store the actor and its context.

There are three functions:

  • FutureInterop::interop_actor()

    This is equivalent to the into_actor() method except that it populates the TLS slots with the actor and the context before polling the inner future. There's also a boxed variant of this function.

  • with_ctx

    This takes a closure and immediately calls it, passing in mutable references to the actor and context obtained from the TLS slots to that closure.

  • critical_section

    This takes a future/async block and uses ctx.wait() to make sure it has exclusive access to the actor state while it is running. It returns a future itself so you can seamlessly demarcate blocks of code within an async/await block as being critical sections.

Example:

Some things which are very hard to do right now are made much easier in this pattern (for example, when only half of a message handler needs to run atomically). Also, all of the normal futures combinators can be used, rather than the more limited set exposed under actix::fut.

Currently the crate relies on a patch to scoped_thread_local which hasn't been merged yet, which is why I haven't published it. However, except for scoped-tls this crate itself does not require any unsafe code at all.

Jonathas-Conceicao

1

I have been experimenting with actix's current async Context implementation, Context, and I've come across a characteristic that I don't know is a bug or the expected behavior. Check the following code:

This example stops naturally, since the Actor has no live Address holders and no futures to process; when it stops, the stopping method is called, which sets the shared data that unblocks the main function.

Now if we change the stopping method to the following:

I would expect the stopping method to be called multiple times and the Actor to eventually fully stop. What actually happens is that the Actor's future is left pending after stopping first returns Running::Continue, but it is never woken again. Running this under gdb makes it easy to see that the Future stops being polled.
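Since the modified snippet isn't shown here, a rough self-contained sketch of a stopping method that returns Running::Continue (hypothetical actor name, written against the current API) could be:

    use actix::prelude::*;

    struct Stubborn {
        attempts: u32,
    }

    impl Actor for Stubborn {
        type Context = Context<Self>;

        fn stopping(&mut self, _ctx: &mut Self::Context) -> Running {
            self.attempts += 1;
            println!("stopping attempt {}", self.attempts);
            if self.attempts < 3 {
                // Expectation: stopping is called again later and the actor
                // eventually stops. Reported behaviour: the actor's future is
                // never polled again after this returns, so the program hangs.
                Running::Continue
            } else {
                Running::Stop
            }
        }

        fn stopped(&mut self, _ctx: &mut Self::Context) {
            println!("stopped");
            System::current().stop();
        }
    }

    fn main() {
        let sys = System::new();
        sys.block_on(async {
            let addr = Stubborn { attempts: 0 }.start();
            // No messages and no other Addr holders: the actor begins stopping.
            drop(addr);
        });
        sys.run().unwrap();
    }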

I have a simple solution for it, which would be to call cx.waker().wake_by_ref() when the Future becomes Pending because the stopping method returned Running::Continue. That way the Future would immediately be available to be polled by the runtime again, and this example would finish.

What do you think? Should the Actor's future wake itself when stopping returns Running::Continue, or should it only be woken by external sources, e.g. a new message being received?

david-mcgillicuddy-moixa

2

Currently, when using SyncArbiter::start with only one thread, we have to do some clones that we could otherwise avoid:
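The original snippet isn't included here; the shape of the problem is roughly this (hypothetical types):

    use actix::prelude::*;

    struct DbActor {
        conn_string: String, // stands in for something more expensive to clone
    }

    impl Actor for DbActor {
        type Context = SyncContext<Self>;
    }

    fn start_db(conn_string: String) -> Addr<DbActor> {
        // Even with a single thread, the captured value must be cloned inside
        // the closure, because SyncArbiter::start takes an Fn factory (it may
        // be called once per worker thread), not FnOnce.
        SyncArbiter::start(1, move || DbActor {
            conn_string: conn_string.clone(),
        })
    }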

It would be nice if this were special-cased with a start_once method or similar. With an FnOnce closure instead of Fn, I believe you could properly move variables inside. It's even worse with types you can't clone; then you end up throwing around Arcs, etc.

One additional advantage of this is that we could potentially also signal failure, i.e. allow the inner FnOnce to return a Result.

If there's interest I'm happy to put this together - it should be mostly copy pasting the existing start method.

jpeel

0

When Addr is cloned, the AddressSender is cloned. In AddressSender's clone code it clones the inner member variable of the clone target, but maybe_parked is set to false and sender_task is initialized as empty. This bypasses the mailbox checks for the first message sent into the cloned Addr. AddressSender's sender method also returns an AddressSender with similar maybe_parked and sender_task fields.

I suggest that in the clone function for AddressSender, the maybe_parked and sender_task fields are cloned. Perhaps it is better to clone sender_task and then check sender_task for is_parked to set maybe_parked. From what I'm reading that shouldn't cause a problem because these are only used in the AddressSender for determining if it is parked.

In AddressSender's sender method, the fix is more complicated and would entail checking the fullness of the mailbox and setting maybe_parked and the task appropriately.

Below is code that shows the issue written for actix 0.8.3. I can make a 0.9 version if needed, but the part that does the cloning in the master branch of actix still has this problem. If run as it is, the try_send never fails because the Addr is cloned. If the clone invocation at line 90 is commented out, the try_send sometimes fails because the mailbox is full.
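The attached example isn't reproduced here; a rough sketch of the described behaviour against the current API (hypothetical names, assuming the default mailbox capacity of 16) might look like:

    use actix::prelude::*;

    #[derive(Message)]
    #[rtype(result = "()")]
    struct Ping;

    struct Slow;

    impl Actor for Slow {
        type Context = Context<Self>;
    }

    impl Handler<Ping> for Slow {
        type Result = ();

        // Never actually runs in this sketch: main never yields, so the actor's
        // mailbox is never drained and simply fills up.
        fn handle(&mut self, _msg: Ping, _ctx: &mut Context<Self>) {}
    }

    #[actix::main]
    async fn main() {
        let addr = Slow.start();

        // Fill the mailbox until try_send on the original Addr reports Full.
        let mut saw_full = false;
        for _ in 0..100 {
            if addr.try_send(Ping).is_err() {
                saw_full = true;
                break;
            }
        }
        println!("original Addr reported a full mailbox: {}", saw_full);

        // Per the report, a fresh clone's first try_send still succeeds,
        // because the cloned AddressSender starts with maybe_parked == false
        // and skips the capacity check once.
        let cloned = addr.clone();
        println!("clone's try_send succeeded: {}", cloned.try_send(Ping).is_ok());
    }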

Firstyear

1

I have a Unix stream socket with actix = "0.9" on macOS. When it receives a message and writes a response to the FramedWrite, the content is not flushed until the socket is closed.

This can be shown with netcat quite easily. I am attaching a sample program and request.

What happens is the request is sent:

The server sees it, and then attempts to respond

netcat then shuts down due to no response.

With a Rust-based client that blocks, the client blocks until the server is Ctrl-C'd; only then does the server finally call flush and send the IO.

This cannot be reproduced on Linux, where it appears to behave correctly. It only seems to affect flushing on macOS.

actix_uds_repro.zip

Jonathas-Conceicao

42

I'm trying to implement an asynchronous actor response that uses the Actor's self reference on actix v0.9, but I can't get it to work on account of a lifetime-bounds error. I've extracted a small example that contains the error:

But this gives me the following error:

I've tried to implement this using ActorFuture and the then combinator, but I get similar errors; this is what I tried:

The problem seems to be that I can only have 'static references in the future I return, but since the future will be handled by the Actor itself, shouldn't it be able to use its own reference?

Are there examples of asynchronously handling messages? I have only been able to find the doc example on ActorFuture, which is a little outdated; and it's a somewhat different situation, since it doesn't use a reference to the actor in the futures it chains.
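One common workaround for the 'static bound, shown here recast against the current ActorFuture API rather than v0.9 (hypothetical types), is to move a copy of whatever the async block needs and then re-enter the actor with into_actor:

    use actix::prelude::*;

    #[derive(Message)]
    #[rtype(result = "usize")]
    struct Compute(usize);

    struct Worker {
        offset: usize,
    }

    impl Actor for Worker {
        type Context = Context<Self>;
    }

    impl Handler<Compute> for Worker {
        type Result = ResponseActFuture<Self, usize>;

        fn handle(&mut self, msg: Compute, _ctx: &mut Context<Self>) -> Self::Result {
            // Copy (or clone) what the async part needs instead of borrowing self.
            let offset = self.offset;
            Box::pin(
                async move {
                    // Purely async work; no access to the actor here.
                    msg.0 + offset
                }
                .into_actor(self) // regain access to the actor afterwards
                .map(|result, act, _ctx| {
                    act.offset = result; // &mut self is available again here
                    result
                }),
            )
        }
    }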

therealprof

therealprof

0

Something seems to interfere with the std::fmt::Display derive (my bet is on derive_more, most probably pulled in by use actix::prelude::*, because that's the only actix use line I have in the code):

If I explicitly try to use std::fmt::Display; I get an additional warning:

Explicitly using derive_more::Display fixes that, but it's yet another dependency I'm not keen on having.

markhildreth

2

I'm using Actix 0.7.9. I'm reading this documentation, which says the following:

The Actor's execution state changes to the stopping state in the following situations:

  • Context::stop is called by the actor itself
  • all addresses to the actor get dropped. i.e. no other actor references it.
  • no event objects are registered in the context.

I was slightly confused, then, when an actor was not being stopped even though the situation described by the second bullet occurred ("all addresses to the actor get dropped"). Through testing, I realized that the reason was that the actor had a run_interval call. If I removed that interval, it would stop as expected.

I'm not sure whether using run_interval in this way is what is meant by the third bullet ("no event objects are registered in the context"). However, even in that case, the wording suggests that the actor should stop if any of the three situations occurs, regardless of the others.

I'm not sure if this is a situation where the behavior is as-expected, and the documentation needs to be touched up, or if there is a bug.

Example Code

Based on the documentation, I would expect this code to stop the child actor almost immediately after the parent actor is stopped. Instead, the child actor lives forever.
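The example code isn't reproduced here; a simplified sketch of the reported behaviour (current API, no parent/child pair, hypothetical actor name) might look like:

    use std::time::Duration;

    use actix::prelude::*;

    struct Child;

    impl Actor for Child {
        type Context = Context<Self>;

        fn started(&mut self, ctx: &mut Self::Context) {
            // The registered interval keeps the actor alive even after every
            // Addr to it has been dropped.
            ctx.run_interval(Duration::from_secs(1), |_act, _ctx| {
                println!("child still alive");
            });
        }

        fn stopped(&mut self, _ctx: &mut Self::Context) {
            println!("child stopped"); // never printed in the reported behaviour
        }
    }

    fn main() {
        let sys = System::new();
        sys.block_on(async {
            let addr = Child.start();
            // All addresses dropped; per the documentation's second bullet the
            // actor should move to the stopping state.
            drop(addr);
        });
        sys.run().unwrap(); // runs forever; the child keeps ticking
    }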

vincentdephily

3

This is happening pretty reliably in my code, once enough messages are scheduled:

  • create a timeout handler using self.timeouts.insert(msgid, ctx.notify_later(MsgTimeout{msgid}, now + delay))
  • then ctx.cancel_future(self.timeouts.get(msgid).unwrap()) a few ms later when the reply arrives
  • but I still receive the MsgTimeout message a few secs later

There shouldn't be a race condition, as the timeout arrives many seconds after it was supposedly cancelled. I've triple-checked that I do indeed cancel the right SpawnHandle. I've tried to create a reduced test case that I can share, but didn't succeed. I don't think it's the same as issue #206, as there's no UDP socket involved (there is TCP, though).

This is a problem both because of the unexpected timeout that leads to spurious handling, and because the Actor remains alive while waiting for the message.
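A rough sketch of the pattern described above (hypothetical actor and message types; the real code presumably differs):

    use std::collections::HashMap;
    use std::time::Duration;

    use actix::prelude::*;

    #[derive(Message)]
    #[rtype(result = "()")]
    struct MsgTimeout {
        msgid: u64,
    }

    #[derive(Message)]
    #[rtype(result = "()")]
    struct Reply {
        msgid: u64,
    }

    struct Proto {
        timeouts: HashMap<u64, SpawnHandle>,
    }

    impl Actor for Proto {
        type Context = Context<Self>;
    }

    impl Proto {
        fn arm_timeout(&mut self, msgid: u64, delay: Duration, ctx: &mut Context<Self>) {
            // Schedule the timeout and remember its handle so it can be cancelled.
            let handle = ctx.notify_later(MsgTimeout { msgid }, delay);
            self.timeouts.insert(msgid, handle);
        }
    }

    impl Handler<Reply> for Proto {
        type Result = ();

        fn handle(&mut self, msg: Reply, ctx: &mut Context<Self>) {
            // The reply arrived in time: cancel the pending timeout. Per the
            // report, MsgTimeout is sometimes still delivered later anyway.
            if let Some(handle) = self.timeouts.remove(&msg.msgid) {
                ctx.cancel_future(handle);
            }
        }
    }

    impl Handler<MsgTimeout> for Proto {
        type Result = ();

        fn handle(&mut self, msg: MsgTimeout, _ctx: &mut Context<Self>) {
            println!("timeout fired for {} (unexpected if it was cancelled)", msg.msgid);
            self.timeouts.remove(&msg.msgid);
        }
    }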

theronic

1

This feels like an ordering issue in my code, or a bug in Actix 0.8.3. The following throws an illegal instruction on ARM when called from within fn started(...):

    WsChatServer::from_registry()
        .send(my_message)
        .into_actor(self)
        .then(|id, act, _ctx| {
            info!("Then happened!"); // works on Mac, not on ARM.
            fut::ok(())
        })
        .spawn(ctx);

I reduced it to WsChatServer::from_registry().do_send(my_message), but I get the same problem.

Versions

Find the latest versions by id

broker-v0.4.3 - May 24, 2022

  • Fix at-least-once delivery guarantee for IssueAsync. #536

broker-v0.4.2 - May 23, 2022

  • Add support for actix v0.13.

actix-v0.13.0 - May 23, 2022

Added

  • Implement Clone for WeakRecipient. #518
  • Implement From<Recipient> for WeakRecipient. #518
  • Add Recipient::downgrade method for obtaining a WeakRecipient. #518
  • Extend Sender trait with a downgrade method. #518

Changed

  • Make WeakSender trait public rather than crate-public. #518
  • Updated tokio-util dependency to 0.7. #???
  • Updated minimum supported Rust version to 1.49.

Removed

  • Remove Resolver actor. #451

v0.12.0 - Jun 08, 2021

Added

  • Add fut::try_future::ActorTryFuture. #419
  • Add fut::try_future::ActorTryFutureExt trait with map_ok, map_err and and_then combinators. #419
  • Add fut::future::ActorFutureExt::boxed_local. #493
  • Implemented MessageResponse for Vec<T>. #501

Changed

  • Make Context::new public. #491
  • SinkWriter::write returns Result instead of Option. #499

broker-v0.4.1 - Jun 08, 2021

  • Add support for actix v0.12 (in addition to v0.11).

v0.11.1 - Mar 23, 2021

Fixed

  • Panics caused by instant cancellation of a spawned future. #484

v0.11.0 - Mar 21, 2021

Removed

  • Remove fut::IntoActorFuture trait. #475
  • Remove fut::future::WrapFuture's Output associated type. #475
  • Remove fut::stream::WrapStream's Item associated type. #475
  • Remove prelude::Future re-export from std. #482
  • Remove fut::future::Either re-export. Support for the Either enum re-exported from futures_util still exists. #482
  • Remove fut::future::FutureResult type alias. #482

broker-v0.4.0 - Mar 21, 2021

  • No significant changes from v0.4.0-beta.1.

broker-v0.4.0-beta.1 - Mar 03, 2021

  • Bump actix dependency to v0.11.0-beta.3.

actix-v0.11.0-beta.3 - Mar 03, 2021

Added

  • Added fut::{ActorFutureExt, ActorStreamExt} traits providing extension methods for the ActorFuture and ActorStream traits, aiming for a trait set similar to the futures crate. #474
  • Added the ActorStreamExt::collect method for collecting an actor stream's items and outputting them as an actor future. #474
  • Added ActorStreamExt::take_while method to take an actor stream's items based on the closure output. #474
  • Added ActorStreamExt::skip_while method to skip an actor stream's items based on the closure output. #474
  • Added fut::LocalBoxActorFuture type to keep inline with the futures::future::LocalBoxFuture type. #474

Changed

  • Rework ActorFuture trait. #465
  • fut::{wrap_future, wrap_stream} now need a type annotation for the Actor type. #465
  • The dev::MessageResponse::handle method no longer needs a generic type. #472
  • fut::{ok, err, result, FutureResult, Either} are changed to re-exports of the futures::future::{ready, Ready, Either} types. #474
  • The futures::future::{ok, err, ready, Ready, Either} types implement the ActorFuture trait by default. #474

Removed

  • Remove dev::ResponseChannel trait #472

v0.11.0-beta.2 - Feb 10, 2021

Changed

  • Update actix-rt to v2.0.0. [#461]
  • Feature resolver is no longer default. [#461]
  • Rename derive feature to macros since it now includes derive and attribute macros. [#461]

v0.11.0-beta.1 - Jan 02, 2021

Added

  • Re-export actix_rt::main macro as actix::main. #448
  • Added actix::fut::Either::{left, right}() variant constructors. #453

Changed

  • The re-exported actix-derive macros are now conditionally included with the derive feature which is enabled by default but can be switched off to reduce dependencies. #424
  • The where clause on Response::fut() was relaxed to no longer require T: Unpin, allowing a Response to be created with an async block #421
  • Allow creating WeakRecipient from WeakAddr, similar to Recipient from Addr. #432
  • Send SyncArbiter to current System's Arbiter and run it as future there. Enables nested SyncArbiters. #439
  • Use generic type instead of associate type for EnvelopeProxy. #445
  • SyncEnvelopeProxy and SyncContextEnvelope are no longer bound to an Actor. #445
  • Rename actix::clock::{delay_for, delay_until, Delay} to {sleep, sleep_until, Sleep}. #443
  • Remove all Unpin requirement from ActorStream. #443
  • Update examples and tests according to the changes in actix-rt. Arbiter::spawn and actix_rt::spawn now panic outside the context of actix::System. They must be called inside System::run, SystemRunner::run or SystemRunner::block_on. More information can be found here. #447
  • actix::fut::Either's internal variants' representation has changed to struct fields. #453
  • Replace pin_project with pin_project_lite #453
  • Update crossbeam-channel to 0.5
  • Update bytes to 1. #443
  • Update tokio to 1. #443
  • Update tokio-util to 0.6. #443

Fixed

  • Unified MessageResponse impl (combine separate Item/Error type, migrate to Item=Result). #446
  • Fix error for build with --no-default-features flag, add sink feature for futures-util dependency. #427

Removed

  • Remove unnecessary actix::clock::Duration re-export of std::time::Duration. #443

v0.10.0 - Sep 11, 2020

Changed

  • SinkWrite::write calls now send all items correctly using an internal buffer. #384
  • Add Sync bound to the Box<dyn Sender> trait object, making Recipient a Send + Sync type. #403
  • Update parking_lot to 0.11 #404
  • Remove unnecessary PhantomData field from Request, making it Send + Sync regardless of whether Request's type argument is Send or Sync. #407

v0.10.0-alpha.3 - May 12, 2020

Changed

  • Update tokio-util dependency to 0.3, FramedWrite trait bound is changed. #365
  • Only poll dropped ContextFut if event loop is running. #374
  • Minimum Rust version is now 1.40 (to be able to use #[cfg(doctest)])

Fixed

  • Fix ActorFuture::poll_next impl for StreamThen to not lose inner future when it's pending. #376

v0.10.0-alpha.2 - Mar 05, 2020

CHANGES

Added

  • New AtomicResponse, a MessageResponse with exclusive poll over actor's reference. #357

Changed

  • Require Pin for ResponseActFuture. #355

v0.10.0-alpha.1 - Feb 25, 2020

CHANGES

Fixed

  • Fix MessageResponse implementation for ResponseFuture to always poll the spawned Future. #317

Added

  • Allow return of any T: 'static on ResponseActFuture. #310

  • Allow return of any T: 'static on ResponseFuture. #343

Changed

  • Feature http was removed. Actix support for http was moved solely to actix-http and actix-web crates. #324

  • Make Pins safe #335 #346 #347

  • Only implement ActorFuture for Box where ActorFuture is Unpin #348

  • Upgrade trust-dns-proto to 0.19 #349

  • Upgrade trust-dns-resolver to 0.19 #349

v0.9.0 - Dec 20, 2019

Changes

[0.9.0] 2019-12-20

Fixed

  • Fix ResolveFuture type signature.

[0.9.0-alpha.2] 2019-12-16

Fixed

  • Fix Resolve actor's panic

[0.9.0-alpha.1] 2019-12-15

Added

  • Added Context::connected() to check whether any addresses are alive

  • Added fut::ready() future

Changed

  • Migrate to std::future, tokio 0.2 and actix-rt 1.0.0 @bcmcmill #300

  • Upgrade derive_more to 0.99.2

  • Upgrade smallvec to 1.0.0

Fixed

  • Added #[must_use] attribute to ActorFuture and ActorStream

v0.9.0-alpha.1 - Dec 15, 2019

CHANGES

Added

  • Added Context::connected() to check whether any addresses are alive

  • Added fut::ready() future

Changed

  • Migrate to std::future, tokio 0.2 and actix-rt 1.0.0 @bcmcmill #300

  • Upgrade derive_more to 0.99.2

  • Upgrade smallvec to 1.0.0

Fixed

  • Added #[must_use] attribute to ActorFuture and ActorStream

v0.8.3 - May 29, 2019

Changes

Fixed

  • Stop actor on async context drop

v0.8.2 - May 13, 2019

Changes

Changed

  • Enable http feature by default

  • Upgrade to actix-http 0.2

v0.8.1 - Apr 17, 2019

CHANGES

Added

  • Added std::error::Error impl for SendError

Fixed

  • Fixed concurrent system registry insert #248

v0.8.0 - Apr 15, 2019

CHANGES

Added

  • Added std::error::Error impl for MailboxError

Changed

  • Use trust-dns-resolver 0.11.0

v0.8.0-alpha.3 - Apr 12, 2019

CHANGES

Added

  • Add Actor::start_in_arbiter with semantics of Supervisor::start_in_arbiter.

  • Add ResponseError for ResolverError

  • Add io::SinkWrite

v0.8.0-alpha.2 - Mar 29, 2019

CHANGES

Added

  • Add actix-http error support for MailboxError

v0.8.0-alpha.1 - Mar 28, 2019

CHANGES

Changes

  • Edition 2018

  • Replace System/Arbiter with actix_rt::System and actix_rt::Arbiter

  • Add implementations for Message for Arc and Box

  • System and arbiter registries available via from_registry() method.

Deleted

  • Deleted signals actor

0.7.10 - Jan 16, 2019

Changed

  • Added WeakAddr<A> to weakly reference an actor

  • Replace async functions of Response and ActorResponse as future

0.7.9 - Dec 11, 2018

Changes

  • Removed actix module from prelude to make it more friendly for 2018 edition. See rationale in #161

0.7.8 - Dec 05, 2018

Changed

  • Update crossbeam-channel to 0.3 and parking_lot to 0.7 alongside actix-web

0.7.7 - Nov 22, 2018

Changes

Added

  • Impl Into<Recipient<M>> for Addr<A>

v0.7.6 - Nov 08, 2018

CHANGES

Changed

  • Use trust-dns-resolver 0.10.0.

  • Make System::stop_with_code public.

Information - Updated Jun 22, 2022

Stars: 7.2K
Forks: 595
Issues: 38
