Well, it looks like there is a support issue with many date aggregate functions in Databricks/Spark when using the Hadoop Parquet table format.
The fixes/workarounds that worked for me are:
1. Use an extract instead of a live connection.
2. Keep the live connection (it makes sense to have one if you are using a Spark environment), extract the year and month, and write a formula to calculate the quarter from the month. This is working the way I wanted.
3. Add separate fields (e.g. quarter) in the database itself.
My fix for now is option 2; once the database fields are created, I'll switch to using them.
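For anyone wondering what the quarter formula in option 2 looks like, here is a minimal sketch of the arithmetic in Python (illustrative only, not Tableau/Spark syntax; `quarter_from_month` is a hypothetical helper name):

```python
def quarter_from_month(month: int) -> int:
    """Map a month number (1-12) to its calendar quarter (1-4)."""
    # Months 1-3 -> Q1, 4-6 -> Q2, 7-9 -> Q3, 10-12 -> Q4
    return (month - 1) // 3 + 1

# Quick sanity check across the year
print([quarter_from_month(m) for m in range(1, 13)])
# → [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4]
```

The same integer arithmetic translates directly into a calculated field or SQL expression once the month number is available.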