Test cases have served teams well as a tool for measuring quality over time: across many runs of those test cases, the team knows (or should know) roughly whether quality is getting better, worse, or staying the same.
Test cases run by a person are manual test cases. They were probably developed as manual test cases anyway, whether or not the person who created them had that in mind, because they were written for a person to follow.
The people doing manual testing would be more skilled at, and less bored by, exploratory testing. Exploratory testing is more fun, and good testers excel at it!
If quality automation verifies the functional requirements, and the code that drives the SUT is self-documenting in a hierarchy that reaches all the way down to atomic steps (each with no more than one point of failure), then there is no need for people to run test cases manually at all: every measurement of product behavior is now self-documenting in full detail and available to team members in every role.
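The self-documenting hierarchy of steps can be sketched as follows. This is a minimal illustration in Python; the names here (`StepLog`, `step`, `render`) are hypothetical, not from any particular MetaAutomation implementation. Compound steps contain child steps, and the leaves are atomic steps with a single point of failure; the runtime artifact records every step by name and status so anyone on the team can drill into it.

```python
# A minimal sketch of a self-documenting check: steps nest in a
# hierarchy down to atomic steps, and the run produces a readable
# record of exactly what was checked. All names are illustrative.
from contextlib import contextmanager

class StepLog:
    def __init__(self):
        self.root = {"name": "check", "children": []}
        self._stack = [self.root]

    @contextmanager
    def step(self, name):
        # Each step self-documents: its name and pass/fail status
        # are recorded in the hierarchy at runtime.
        node = {"name": name, "children": [], "status": "pass"}
        self._stack[-1]["children"].append(node)
        self._stack.append(node)
        try:
            yield
        except Exception:
            node["status"] = "fail"
            raise
        finally:
            self._stack.pop()

def render(node, depth=0):
    # Flatten the hierarchy into indented lines for drill-down reading.
    lines = ["  " * depth + f"{node['name']}: {node.get('status', '')}"]
    for child in node["children"]:
        lines += render(child, depth + 1)
    return lines

log = StepLog()
with log.step("Verify login requirement"):          # compound step
    with log.step("Open login page"):               # compound step
        with log.step("Navigate to URL"):           # atomic: one point of failure
            pass
    with log.step("Submit valid credentials"):      # compound step
        with log.step("Enter user name"):           # atomic
            pass
        with log.step("Click Submit"):              # atomic
            pass

print("\n".join(render(log.root)))
```

A team member can read the rendered hierarchy at whatever level of detail they need: the top line names the functional requirement being verified, and the leaves show the atomic steps that were actually performed.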
Manual testing is something that everybody on the larger software team should do, at least a little. With quality automation done well, the hierarchy of steps that describes check behavior at runtime is available to anybody on the team, who can drill down to the desired level of detail to see exactly what is checked and what is not. Whatever is not checked by automation can be tested manually.
As self-documenting checks grow to cover the implemented functional requirements, the need for test cases goes away. The team can then do the testing that only people can do across general areas, features, or pages, and the people doing it can get creative in covering each area. Rather than following test cases, they can do more exploratory testing and therefore be more effective at sniffing out bugs.
With MetaAutomation, there is no longer any need for test case creation or management.