It's not uncommon for networking devices to list the signaling speed of each port individually, without regard for whether the other ports (or the embedded CPU controlling it all) can actually sustain data flows at the speed afforded by the fastest port.
A listed port speed is more an indication that the device can interoperate with other devices using that technology, and can send individual packets at the specified signaling rate, rather than a promise of sustained throughput.
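To make the distinction concrete, here's a minimal sketch (with made-up numbers, not measurements from any real device) of how a port's line rate and a device's packet-forwarding limit combine to bound sustained throughput:

```python
# Hypothetical illustration: a port can signal at 1 Gbps, but sustained
# throughput is capped by how many packets per second (pps) the device's
# embedded CPU can actually forward. All figures below are assumptions
# chosen for the example, not specs of any particular product.

LINK_RATE_BPS = 1_000_000_000   # 1 Gbps signaling rate (line rate)
CPU_LIMIT_PPS = 50_000          # assumed CPU forwarding limit (made up)
FRAME_SIZE_BYTES = 1500         # typical MTU-sized Ethernet frame

def sustained_throughput_bps(link_rate_bps, cpu_limit_pps, frame_size_bytes):
    """Sustained throughput is the lesser of the line rate and what the
    CPU can push: packets per second times bits per packet."""
    cpu_bound_bps = cpu_limit_pps * frame_size_bytes * 8
    return min(link_rate_bps, cpu_bound_bps)

rate = sustained_throughput_bps(LINK_RATE_BPS, CPU_LIMIT_PPS, FRAME_SIZE_BYTES)
print(f"Port signals at {LINK_RATE_BPS / 1e9:.1f} Gbps, "
      f"but sustained throughput is only about {rate / 1e6:.0f} Mbps")
```

With these assumed numbers the device tops out around 600 Mbps even though every individual frame leaves the port at the full 1 Gbps signaling rate; only a CPU that could forward well over 80,000 full-size frames per second would saturate the link.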
Gigabit Ethernet ports started showing up on motherboards as early as 1999 (probably earlier) even though it would still be years before CPUs could really transfer 1Gbps of data in either direction through that port. I'm sure there are plenty of cheap networking devices on the market today that have Gigabit Ethernet ports yet can't sustain a 1Gbps data flow in either direction.
This situation is not unique to networking equipment either; it's common across I/O technologies. For example, when SATA-III came out, there were plenty of SATA-III hard drives that couldn't do sustained reads or writes at full SATA-III speeds. They could, however, speak the SATA-III protocol with individual messages sent at SATA-III signaling rates. With USB 3.1 "SuperSpeed+" 10Gbps coming out now, I'm sure you'll be able to find plenty of devices that claim to be USB 3.1 SuperSpeed+ compliant but can't sustain 10Gbps data flows.
I'm not trying to excuse or rationalize the confusion this causes; I'm just pointing out that this is business as usual in the high-tech industry. Vendors of powerline Ethernet adaptors are not unique in doing this.