$\begingroup$

I wrote a simple ChaCha20 file encryptor in C using the MbedTLS library. The encryption process is standard: I set the key once, then for each block of fixed size I generate a new nonce (to prevent nonce reuse) and encrypt that block. I then write the nonce to the output file and append the block of encrypted data. So it is basically a while loop that reads blocks from the file and encrypts each of them with the key and a freshly generated nonce.

Here are the results of the time command when I encrypt a 1Gb file:

./main test.dat test.bin  2.67s user 1.04s system 97% cpu 3.803 total

I generate nonce for each block with this method:

mbedtls_ctr_drbg_random(&ctx->ctr_drbg, nonce, NONCE_SIZE);

Where ctx is my structure where I saved the ctr_drbg context, and nonce is just a uint8_t nonce[12]. I am not gathering entropy each time: I seeded the generator only once and then just repeat the call above for each block.

Then I call this code:

mbedtls_chacha20_starts(&ctx->ctx, nonce, 0);
mbedtls_chacha20_update(&ctx->ctx, bytes_read, in_buffer, out_buffer);
  • in_buffer - the block of data read from the input file.
  • out_buffer - the encrypted data written to the output file.
  • bytes_read - the number of bytes read into in_buffer.

And repeat the whole process until there is nothing left in the file.

Encryption and decryption work fine. However, my question is quite simple: is this an acceptable speed for ChaCha20? Is there a way to speed it up? Does generating a new nonce for every block slow down the execution? If you need my complete code, I can post it. My question is not about the code but about the speed, so I guess this is the right place to ask.

Thanks in advance.

$\endgroup$
  • $\begingroup$ The openssl speed command is the base comparison for all. $\endgroup$
    – kelalaka
    Commented Mar 17 at 7:48
  • $\begingroup$ For the speed of file encryption: 1) The hardware matters, in particular the media holding the file, and the CPU. 2) The OS matters. 3) The unit used for the file size matters. To a person interpreting standard scientific conventions, a "1Gb file" reads as a one-gigabit file, where it takes 8 bits to make one byte and giga (G) stands for 9 decimal orders of magnitude, thus 125000000 bytes. But others will understand 128000000, 131072000, 134217728, 1000000000, 1024000000, 1048576000, or 1073741824 bytes (with the last standing for 1 gibibyte). $\endgroup$
    – fgrieu
    Commented Mar 17 at 8:48
  • $\begingroup$ I don't see any information about how high the IO overhead is. It seems fast enough in my opinion, not as fast as AES performed in hardware, but that's to be expected. But look at how benchmarking needs to be done and you'll discover that there are a lot of questions. To me, there is no need to use a different nonce for each block if you're just going to do encryption without authentication: just calculating the counter is enough. And stream ciphers are easy to parallelize, using mbedtls_chacha20_starts... $\endgroup$
    – Maarten Bodewes
    Commented Mar 18 at 1:43
  • $\begingroup$ Yeah, I found out that without resetting the context you can keep using the same nonce. I just thought there would be a problem such as nonce reuse, but I guess that only appears after resetting the context. $\endgroup$
    – Enty AV
    Commented Mar 19 at 8:39
