Input data.json:

{
  "lastUpdateTime" : "2018-07-20T10:56:26.000Z",
  "items" : [
    { "date" : "2018-07-19T21:09:27.000Z", "user" : "dddd", "size" : 5219402, "rawSize" : 15658206, "numFiles" : 119 },
    { "date" : "2018-07-19T21:09:27.000Z", "user" : "aaaa", "size" : 20524410845, "rawSize" : 61573215663, "numFiles" : 7540 },
    { "date" : "2018-07-19T21:09:27.000Z", "user" : "wwww", "size" : 0, "rawSize" : 0, "numFiles" : 2 },
    { "date" : "2018-07-19T21:09:27.000Z", "user" : "qqqq", "size" : 201084, "rawSize" : 603252, "numFiles" : 25 },
    { "date" : "2018-07-19T21:09:27.000Z", "user" : "ttttt", "size" : 280395332, "rawSize" : 288900666, "numFiles" : 199 }
  ]
}

Expected output:

User Size
aaa  121
bbb  123

How do I convert this JSON into the table above? Please help me.
12 Answers
Well, while I completely agree with @gronostaj that you should NOT use awk or sed as JSON parsing tools, there are cases where you can't use anything beyond what ships with the OS.
If you are absolutely sure your JSON will always be pretty-printed with one key per line, as below, then the following solution works:
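If your real data.json ever arrives as a single compact line instead, you can normalize it first. A minimal sketch using Python's standard-library json.tool (the filenames data.json and pretty.json are just placeholders for illustration):

```shell
# Create a compact one-line sample file (stand-in for the real data.json).
printf '%s\n' '{ "items" : [ { "user" : "aaaa", "size" : 1 } ] }' > data.json

# python3 -m json.tool pretty-prints JSON with one key per line,
# which is exactly the shape the awk script below expects.
python3 -m json.tool data.json > pretty.json
cat pretty.json
```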
#!/bin/sh
data='
{
  "lastUpdateTime" : "2018-07-20T10:56:26.000Z",
  "items" : [
    {
      "date" : "2018-07-19T21:09:27.000Z",
      "user" : "dddd",
      "size" : 5219402,
      "rawSize" : 15658206,
      "numFiles" : 119
    },
    {
      "date" : "2018-07-19T21:09:27.000Z",
      "user" : "aaaa",
      "size" : 20524410845,
      "rawSize" : 61573215663,
      "numFiles" : 7540
    },
    {
      "date" : "2018-07-19T21:09:27.000Z",
      "user" : "wwww",
      "size" : 0,
      "rawSize" : 0,
      "numFiles" : 2
    },
    {
      "date" : "2018-07-19T21:09:27.000Z",
      "user" : "qqqq",
      "size" : 201084,
      "rawSize" : 603252,
      "numFiles" : 25
    },
    {
      "date" : "2018-07-19T21:09:27.000Z",
      "user" : "ttttt",
      "size" : 280395332,
      "rawSize" : 288900666,
      "numFiles" : 199
    }
  ]
}
'
###########################################################
echo "${data}" | awk -F: '
BEGIN { printf ("%s\t\t%s\t%s\n", "Date", "User", "Size") }
/lastUpdateTime/ { next }
/date/ { gsub(/\"|,|[[:space:]]/, ""); gsub(/T.+$/, ""); printf ("%s\t", $2) }
/user/ { gsub(/\"|,|[[:space:]]/, ""); printf ("%s\t", $2) }
/size/ { gsub(/\"|,|[[:space:]]/, ""); printf ("%s\n", $2) }
'

Note that [[:space:]] is used instead of the GNU-only \s, so the script also works with POSIX awk. The /size/ pattern does not match the "rawSize" lines because awk regexes are case-sensitive.

The only honestly correct answer is:
Don't.
awk and sed aren't the right tools for the job: you won't be able to handle JSON escaping and encoding properly with them. You could try to cover some base cases, but you might as well just use the proper tool: jq.
jq solution
jq -r '.items[] | "\(.user) \(.size)"' /path/to/file

(Alternatively, you can pipe the JSON into the command instead of reading it from a file.)
To align columns:
jq -r '.items[] | "\(.user) \(.size)"' /path/to/file | column -t
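If you also want the header row shown in the expected output, one possible variant (a sketch, with /path/to/file as a placeholder) prepends a header array and lets jq's @tsv builtin emit tab-separated rows:

```shell
# Emit a ["User","Size"] header array, then one [user, size] array per
# item; @tsv turns each array into a tab-separated line, and column -t
# aligns the columns.
jq -r '["User", "Size"], (.items[] | [.user, .size]) | @tsv' /path/to/file | column -t
```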