Every good DBA and developer knows that you should not use:
SELECT * FROM [Table]
in your queries to a database, especially if you are not using all the columns returned. Return only the columns you need in the SELECT statement. Using SELECT * can cause extra paging, RID lookups, and unneeded table locking, and it hinders any future attempt to create a covering index. In summary, it is bad for performance.
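To make the difference concrete, here is a minimal sketch using an in-memory SQLite table as a stand-in for the database; the Orders table and its columns are hypothetical. Naming the columns you need shrinks every row that comes back:

```python
import sqlite3

# Hypothetical wide table: the caller only ever needs OrderId and Total.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Orders ("
    "OrderId INTEGER PRIMARY KEY, CustomerName TEXT, "
    "ShippingAddress TEXT, Notes TEXT, Total REAL)"
)
conn.execute(
    "INSERT INTO Orders VALUES "
    "(1, 'Contoso', '1 Main St', 'long free-text notes...', 9.99)"
)

# SELECT * drags back every column, needed or not.
wide_row = conn.execute("SELECT * FROM Orders").fetchone()

# Naming only the required columns returns just those values.
narrow_row = conn.execute("SELECT OrderId, Total FROM Orders").fetchone()

print(len(wide_row))    # 5 values per row
print(len(narrow_row))  # 2 values per row
```

The same query shape applies unchanged to SQL Azure; only the row width (and so the bytes transferred) differs.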
However, with SQL Azure there is an additional reason not to use SELECT *: you have to pay for the data transferred. If you are outside the data center (not a Windows Azure application), you pay for the data your query transfers. Request columns you are not using, and you are paying to transfer data you will never use.
So if removing the SELECT * queries has been on your TO-DO list, now is a good time to clean them up before you move to SQL Azure.
It's hard to imagine a non-trivial contemporary business application that would not use an ORM or business object mechanism, such as the Entity Framework, NHibernate, or EntitySpaces, so hand-written "SELECT *" queries should rarely be generated. However, with an ORM it is common to write "Orders.Where(...).OrderBy(...)" without restricting the query to the required columns, which means the resulting SELECT statement specifies every column anyway. Requiring developers to remember to add ".Select(...)" clauses to their ORM queries may not be realistic, and it would mean that in addition to performance profiling, applications are going to need bandwidth profiling. This is probably a good thing in theory, since applications should always be efficient, but in practice it is another issue specific to cloud computing that does not apply to a locally hosted DB.
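A sketch of what an ORM's SQL generator does with and without an explicit projection may help. In LINQ terms, "Orders.Where(...).OrderBy(...)" with no ".Select(...)" maps to the first statement below, while adding a projection such as ".Select(o => new { o.OrderId, o.Total })" maps to the second. The query builder and column names here are hypothetical stand-ins, not any real ORM's API:

```python
def build_select(table, columns=None, where=None, order_by=None):
    """Naive query builder standing in for an ORM's SQL generator."""
    cols = ", ".join(columns) if columns else "*"
    sql = f"SELECT {cols} FROM {table}"
    if where:
        sql += f" WHERE {where}"
    if order_by:
        sql += f" ORDER BY {order_by}"
    return sql

# Without a projection, the generated statement asks for every column:
no_projection = build_select("Orders", where="Total > 100",
                             order_by="OrderDate")
print(no_projection)
# SELECT * FROM Orders WHERE Total > 100 ORDER BY OrderDate

# With a projection, only the needed columns cross the wire:
projected = build_select("Orders", columns=["OrderId", "Total"],
                         where="Total > 100", order_by="OrderDate")
print(projected)
# SELECT OrderId, Total FROM Orders WHERE Total > 100 ORDER BY OrderDate
```

The fix is one extra clause per query, but someone still has to notice each query that is missing it, which is exactly the bandwidth-profiling burden described above.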
Simon: Is "Orders.Where(...).OrderBy(...)" a reference to a client-side LINQ call?