I would phrase it like so:
"Of all records the model labelled 1, only 10% were actually 1 (90% of those predictions were incorrect). Of all records that were truly 1, the model correctly identified 83%."
While this is outside the scope of your question: if support refers to the number of records, it would be beneficial to get a more balanced dataset. The reason your precision is so poor for label 1 is that there are many more "negatives" (0s) than "positives" (1s), which increases the chance of false positives occurring and drags down your precision.
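As a quick sanity check, here is a minimal sketch (with hypothetical labels, not your data) showing how class imbalance can produce exactly this pattern of low precision with high recall. Precision is TP / (TP + FP) and recall is TP / (TP + FN):

```python
def precision_recall(y_true, y_pred, positive=1):
    """Compute precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical imbalanced data: only 6 of 100 records are truly 1.
# The model catches 5 of the 6 positives (high recall) but also
# flags 45 of the 94 negatives as positive (low precision).
y_true = [1] * 6 + [0] * 94
y_pred = [1] * 5 + [0] * 1 + [1] * 45 + [0] * 49
p, r = precision_recall(y_true, y_pred)
print(round(p, 2), round(r, 2))  # 0.1 0.83
```

Even though the model misclassifies fewer than half the negatives, the sheer number of negatives means the false positives swamp the true positives, giving 10% precision alongside 83% recall.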
EDIT:
This question on Cross Validated will help provide more explanation.