I'm not a SQL Server historian, so I don't know exactly when the global temporary table was first introduced into SQL Server. The feature may have been around since the very genesis of SQL Server. But I know for sure that it has been there since I started dealing with SQL Server in the early 90s.
The odd thing about the global temporary table is that although we understand how it behaves, it has not been easy to find a practical scenario where its use is just perfect. I'd like to describe a scenario where I think a global temporary table is a perfect fit. But your opinion may vary, and I'd very much like to hear if you think otherwise or have a more fitting scenario.
Here is the case.
You have a complex query that joins one or more local temporary tables with one or more remote permanent tables. The local temporary table is neither trivially small nor very large. The remote permanent table, on the other hand, is very large. Moreover, the query resultset is relatively small. Finally, this query may be part of a larger stored procedure.
To simplify the exposition, call the local server ServerA and the remote server ServerB. To guarantee performance stability, it's clearly better to process the query on the remote server and send back only the resultset. One approach is to enclose the query in a string and send it through OPENQUERY to the other server for processing. The problem is that ServerB cannot see the local temporary table on ServerA, and this is where a global temporary table becomes useful. If we turn the local temporary table on ServerA into a global one, ServerB can pull the data from that global temporary table into a local temporary table on ServerB, perform the join on ServerB with all the data stored locally there, and dump the result into another global temporary table on ServerB. And finally, from ServerA we can pull the query result out of that global temporary table on ServerB.
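The flow above might look roughly like the following T-SQL, run from ServerA. This is only a sketch: the server, table, and column names (`ServerB` as a linked server, `##LocalData`, `##QueryResult`, `dbo.BigTable`, and the reverse linked server `ServerA` defined on ServerB) are all assumptions for illustration, and details such as linked-server security and the lifetime of the global temporary tables (a global temp table is dropped once its creating session disconnects and no one else is using it) would need care in a real implementation.

```sql
-- On ServerA: stage the local temp table's rows in a global temp table
-- so that another session (ServerB's) can see them.
SELECT CustomerID, OrderDate
INTO   ##LocalData        -- global temp table, visible to other sessions
FROM   #LocalData;        -- the original local temp table

-- Ask ServerB to run the join locally. ServerA is assumed to be defined
-- as a linked server on ServerB so the remote batch can read ##LocalData.
EXEC ('
    -- Pull the small staging data across into a local temp table
    SELECT s.CustomerID, s.OrderDate
    INTO   #Staged
    FROM   [ServerA].tempdb.dbo.##LocalData AS s;

    -- Join against the very large local table and park the small
    -- resultset in a global temp table on ServerB.
    SELECT b.CustomerID, SUM(b.Amount) AS TotalAmount
    INTO   ##QueryResult
    FROM   dbo.BigTable AS b
    JOIN   #Staged      AS s
           ON s.CustomerID = b.CustomerID
    GROUP BY b.CustomerID;
') AT ServerB;

-- Back on ServerA: pull the small resultset home.
SELECT *
FROM OPENQUERY(ServerB, 'SELECT * FROM tempdb.dbo.##QueryResult');
```

The point of the pattern shows up in the data movement: only the staging rows travel to ServerB and only the final resultset travels back, while the join against `dbo.BigTable` never leaves ServerB.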
So why does this approach make sense? First, we are only sending a small amount of data across the servers, namely the data in the local temporary table(s) on ServerA plus the small resultset. Second, we are essentially shipping the logic from ServerA to ServerB and having it processed on ServerB, where the large table(s) live. The more complex the query, the better this approach works, and the larger the table on ServerB, the better it works. It's even better if the query originally joins multiple local temporary tables on ServerA or multiple large tables on ServerB. The problem with joining multiple local temporary tables directly with large remote tables is that the SQL Server optimizer can get the execution plan terribly wrong, and the chance of getting the plan wrong is far from remote.
As a T-SQL programming pattern, the query processing on ServerB can be turned into a generic stored procedure on ServerB, regardless of what the query looks like or how many tables are involved. You only need to call the proc from ServerA, supply the join clause and the necessary local temporary table name(s) as parameters, and receive the result from ServerB.
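A minimal sketch of such a generic procedure on ServerB might look like this. The procedure name, parameters, and the `ServerA` linked server are assumptions, not anything prescribed by the pattern itself; and since the procedure concatenates caller-supplied SQL, in practice you would restrict who can call it and validate the inputs.

```sql
-- Hypothetical generic proc on ServerB. It stages the named global temp
-- table from ServerA into a local temp table #Staged, then runs the
-- caller-supplied join, which is expected to SELECT ... INTO ##QueryResult
-- so that ServerA can fetch the result afterwards.
CREATE PROCEDURE dbo.usp_RemoteJoin
    @StageTable sysname,         -- e.g. N'##LocalData', created on ServerA
    @JoinSql    nvarchar(max)    -- join logic referencing #Staged
AS
BEGIN
    DECLARE @sql nvarchar(max);

    -- Stage and join in the same batch, so #Staged is still in scope
    -- when the caller's join runs.
    SET @sql = N'SELECT * INTO #Staged FROM [ServerA].tempdb.dbo.'
             + QUOTENAME(@StageTable) + N'; '
             + @JoinSql;

    EXEC (@sql);
END;
```

ServerA would then call the proc through the linked server, passing its global temp table name and the join text, and read `##QueryResult` back with OPENQUERY as in the earlier description. Because the proc is driven entirely by its parameters, the same procedure serves any number of different queries.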
You may argue that the whole design should be reworked at a higher level so that we never get into the situation of joining local temporary tables with large remote tables. That is a compelling argument, but it is quite a different story, and in many real-world cases you may not have that choice.
Again, I'd very much like to hear your opinions on this approach, and if you have other scenarios where a global temporary table makes sense, let us know as well.