Here are some answers to your questions:
1. Not all data comes from server apps with APIs that a web data connector can consume. For example, third-party ETL tools use the Hyper API to add native support for writing out extracts.
2. Not everyone has a sufficiently high-performance analytical database, and being a high-performance analytical database is the core of Hyper's design. When Hyper was released, Tableau chose to limit the functionality exposed to us as users; over time they are making more of it available. Having CRUD operations is really useful, for example to support variations of incremental updates, to remove duplicate records, etc. As for the Read part of CRUD: over the years many users have gone to the effort of building a data source in Tableau and then wanted to share that prepared data with other apps, and historically the only way to do that was a CSV download out of a workbook on Tableau Server/Online. Now, with read support, they'll be able to share that data more directly.
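To make the CRUD point concrete, here is a minimal sketch of the deduplication pattern. Note the stand-in: this uses Python's built-in sqlite3 so the snippet is self-contained, but the SQL pattern is what matters — against a real .hyper file you would issue the same kind of statements through the tableauhyperapi library's connection instead. Table and column names are made up for illustration.

```python
import sqlite3

# In-memory database stands in for a .hyper file; with the Hyper API you
# would run equivalent SQL through a tableauhyperapi connection instead.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("East", 100), ("East", 100), ("West", 50)])

# Deduplicate by rebuilding the table from its distinct rows,
# then swapping it in for the original.
con.execute("CREATE TABLE sales_clean AS SELECT DISTINCT * FROM sales")
con.execute("DROP TABLE sales")
con.execute("ALTER TABLE sales_clean RENAME TO sales")

rows = con.execute("SELECT * FROM sales ORDER BY region").fetchall()
print(rows)  # [('East', 100), ('West', 50)]
```

The same create-distinct-and-swap idea extends to incremental updates: stage new rows in a temporary table, then insert only the ones not already present.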
3. Yes, Tableau Desktop & Prep can read CSV. In fact, when Tableau reads a CSV (or Excel using the default connector, or PDF), it loads the data into Hyper, creating what Tableau Desktop calls a "shadow extract" (I'm not sure whether the same terminology is used for Prep). This takes time each and every time the file is loaded, particularly with larger CSV files, whereas if we start from an extract, Hyper has assorted optimizations that let it be loaded into memory almost instantly. If you've ever seen the "for faster startup/performance create an extract" message in Tableau Desktop, that's Tableau prompting you to create an extract and avoid that extra startup time. In addition, with compression, Hyper files can be massively smaller than CSV files, which has nice effects on load times, storage requirements, bandwidth, etc.
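The shadow-extract idea — pay the CSV parsing cost once, then read a prepared database file on every later open — can be sketched in a few lines. This is just the pattern, not Tableau's implementation: sqlite3 stands in for Hyper here so the example is self-contained, and the file names and data are made up.

```python
import csv
import os
import sqlite3
import tempfile

tmpdir = tempfile.mkdtemp()
csv_path = os.path.join(tmpdir, "data.csv")
db_path = os.path.join(tmpdir, "data.db")

# Create a small CSV file to play the role of the raw source.
with open(csv_path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["city", "pop"])
    writer.writerows([["Oslo", 700000], ["Bergen", 290000]])

# One-time load: parse the CSV and store typed rows in a database file.
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE cities (city TEXT, pop INTEGER)")
with open(csv_path, newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    con.executemany("INSERT INTO cities VALUES (?, ?)",
                    ((city, int(pop)) for city, pop in reader))
con.commit()
con.close()

# Every later open reads the prepared file directly -- no CSV parsing.
con2 = sqlite3.connect(db_path)
total = con2.execute("SELECT SUM(pop) FROM cities").fetchone()[0]
print(total)  # 990000
```

With a real extract the second step is where Hyper's format pays off: typed, compressed columns instead of text that must be re-parsed and re-typed on every load.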
I primarily use Alteryx as an ETL tool, and as of v2019.4 Alteryx uses the Hyper API (previously it used the Extract API) to write out extracts; with the Hyper API it can now also read from extracts. However, the native write integration has a couple of limitations (no support for multiple-table extracts, and an issue with the vertex ordering of spatial objects that prevents maps in Tableau from zooming properly), so I used the Hyper API directly to work around those.
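The exact spatial workaround isn't spelled out above, but a common fix for vertex-ordering problems is to normalize ring winding before writing the geometry. Here is a minimal sketch using the shoelace formula; the assumption that counterclockwise is the required orientation is mine for illustration, not something stated above, and real code would write the corrected polygons out through tableauhyperapi.

```python
def signed_area(ring):
    # Shoelace formula: positive for counterclockwise vertex order,
    # negative for clockwise. `ring` is a list of (x, y) tuples.
    area = 0.0
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return area / 2.0

def ensure_ccw(ring):
    # Reverse the vertex order if the ring is wound clockwise.
    return ring if signed_area(ring) > 0 else list(reversed(ring))

cw_square = [(0, 0), (0, 1), (1, 1), (1, 0)]  # clockwise unit square
fixed = ensure_ccw(cw_square)
print(signed_area(fixed) > 0)  # True
```

The same check can be run per ring before inserting spatial values, flipping only the rings whose winding is wrong.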