Obviously, people worry about this because evolutionary processes have produced exactly one example of general-purpose intelligence (i.e., humans). But the kind of evolution in question here is very different.
Even creating a single AI program that can play both chess and checkers seems virtually impossible using these methods.
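To make the contrast concrete: if "these methods" means something like a standard genetic algorithm, then the whole process is a population of candidates scored against one fixed, narrow fitness function. Here is a minimal sketch of that kind of search (the target, population size, and mutation rate are made-up illustrative values, not anyone's actual system):

```python
import random

# Toy genetic algorithm: evolve a bit string toward one fixed target.
# The fitness function is extremely narrow -- the search can only ever
# get better at this single task, which is nothing like the open-ended
# selection pressures that produced human intelligence.

TARGET = [1] * 32          # the one and only thing "success" means here
POP_SIZE = 50
MUTATION_RATE = 0.02

def fitness(genome):
    # Count how many bits match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"solved in generation {generation}")
        break
    # Keep the best half, breed the rest from it.
    parents = population[: POP_SIZE // 2]
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]
    population = parents + children
```

Everything the search "learns" is relative to that one fitness function; there is no pressure toward generality, which is the point of the contrast with biological evolution.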
It almost certainly will be made on purpose eventually, though, which is why the value alignment problem is important.