This is very odd. It seems to me it would be easier to take a real picture of a Walmart and then photoshop in the “Investing in American Jobs” signs (which, by the way, there are too many of, and they’re too large to be realistic). But instead we get this AI garbage. Why? Why not just use a real picture? Why have an AI generate an obviously fake Walmart checkout aisle when real pictures of the same are so, so easy to come by?
Which is easier: typing a prompt into a generator, or taking the time to learn image editing?