The notation 3.2^n looks to me like it means 3 × 2^n rather than (3 point 2)^n.
So the question is whether data lengths should be based on a 6-bit unit or some 'binary' size, in practice 8 bits.
The dominant character size at the time was 6 bits (with 7-bit ASCII just emerging). A 36-bit word length was also common, and was in fact the word size for the successful 709/7090/7094 family.
There were definitely arguments made for keeping a 6-bit size to 'save storage'.
I do not have an authoritative reference to hand, but Wikipedia notes that one influential S/360 feature was:
The 8-bit byte (against financial pressure during development to reduce the byte to 4 or 6 bits), rather than adopting the 7030 concept of accessing bytes of variable size at arbitrary bit addresses.
Aha, per Fred Brooks (see page 25):
There was one very big difference, and that is Gene’s machine was based on the six-bit byte and multiples of that so 24-bit instructions, 48-bit floating part and Jerry’s machine was based on a 8-bit byte and 32-bit instructions, 64-bit, and 32-bit floating point, which is not a real happy choice, but… there are strong arguments each way. And you want your architects to be consistent. You’re not going to have an 8-bit byte and 48-bit instruction floating point word. And, so, then came the biggest internal fight, and that was between the six and eight bit byte, and that story has been told. Gene and I each quit once that week, quit the company, and Manny
(Gene is Amdahl, Jerry is Blaauw)
24 is 3 × 2^3, 48 is 3 × 2^4.
Furthermore:
The extract in the question comes from a 1964 IBM Journal paper, "Architecture of the IBM System/360", by Amdahl, Blaauw, and Brooks. The quoted text is on page 91 of the journal, or 4 or 5 pages into the paper.
The extract is immediately followed by a discussion of character sizes (6 versus 8 bits) and floating-point operand sizes (48 versus 32/64 bits). This makes it clear that 3.2^n is to be understood as 3 × 2^n.
Maybe we can blame the compositor for setting 3.2^n rather than 3·2^n.
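
To make the arithmetic concrete, here is a small illustrative Python sketch (my own, not from the paper) listing the two families of data lengths being contrasted: lengths of the form 3 × 2^n built on the 6-bit character, and plain powers of two built on the 8-bit byte.

```python
# Illustrative only: the two families of data lengths contrasted in the paper.
# Lengths built on a 6-bit unit have the form 3 * 2**n (6, 12, 24, 48, ...),
# while lengths built on an 8-bit byte are plain powers of two (8, 16, 32, 64, ...).
six_bit_lineage = [3 * 2**n for n in range(1, 5)]   # [6, 12, 24, 48]
eight_bit_lineage = [2**n for n in range(3, 7)]     # [8, 16, 32, 64]

print("3 * 2^n (6-bit unit):", six_bit_lineage)
print("2^n     (8-bit byte):", eight_bit_lineage)
```

The 24-bit instructions and 48-bit floating point that Brooks attributes to Amdahl's design fall in the first family; the 32-bit instructions and 32/64-bit floating point of the machine as shipped fall in the second.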