What are overloads good for?

I am currently working on defining overloaded functions’ behavior. I want to be able to overload based on the function’s return type, and there are some very… interesting complications that arise from that.

In order to resolve them, I’m trying to get a clear answer to one question: what are overloaded functions used for?

In particular, I’m talking about overloads that redefine a function with the same name and the same number of arguments, especially when they are defined in different contexts.

The most typical use case for this kind of overloading is, e.g., the max function. You can reasonably expect different implementations of max for user-defined types. Fundamentally, however, they all do the same thing: accept two objects and return (a reference to) the bigger of the two. If there is any ambiguity between overloads in such a case, that ambiguity will not result in fundamentally incorrect results. The different overloads merely pick between different ways of achieving the same end result.
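
For concreteness, here is a minimal C++ sketch of that pattern (the Version type is just an illustration):

```cpp
// Generic case: works for any type with operator<.
template <typename T>
const T& max(const T& a, const T& b) {
    return (a < b) ? b : a;
}

// A user-defined type with its own notion of "bigger".
struct Version {
    int major, minor;
};

// Overload for Version: a different implementation, but the same
// semantic operation: return (a reference to) the bigger of the two.
const Version& max(const Version& a, const Version& b) {
    if (a.major != b.major) return (a.major < b.major) ? b : a;
    return (a.minor < b.minor) ? b : a;
}
```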

My question is this: are there any other use cases I’m missing here? Is there a reasonable case where picking the “wrong” overload means something completely different happens?

My intuition says no, but I’d really like to hear feedback from as many people as I can before I pass judgement.

  • Different implementations for the same semantic operation.
  • Potentially completely different operations that happen to have the same name and number of arguments (please post reasoning in comments)
  • Other (definitely post reasoning in comments)


A little background: in practical terms, I’m introducing a concept of expected-type back-propagation. In other words, when evaluating the leaf nodes, the compiler often knows what type the result should be. This allows earlier casting to the correct type and, as a result, stricter control over dangerous behavior (mostly narrowing conversions). More details here.
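
To make that concrete, here is the kind of hazard in C++ terms, as a stand-in for my language’s syntax (wide() is a hypothetical function returning a wider type):

```cpp
#include <cstdint>

int32_t wide();  // hypothetical: returns a 32-bit result

void demo() {
    // The compiler knows the expected type of the initializer is
    // int8_t. Back-propagating that expectation down to the leaves
    // lets it flag the narrowing up front, rather than evaluating
    // the whole expression as int32_t and truncating at the end.
    int8_t narrow = wide() + 1;
}
```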

In order for that to have any meaningful effect, this propagation must cross function calls (i.e., flow from the call’s expected return type to the function’s arguments). In other words, the expected result of the call becomes one of the considerations the compiler takes into account when selecting which overload to pick.
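
C++ can’t overload on return type directly, but a proxy object with conversion operators approximates the behavior and shows the direction of flow I mean: the expected type at the call site is what selects the implementation (ParseResult and parse are made-up names for illustration):

```cpp
#include <cstdint>
#include <string>

// A proxy whose conversion operators stand in for return-type
// overloads: the caller's expected type picks the implementation.
struct ParseResult {
    const std::string& s;
    operator int16_t() const { return static_cast<int16_t>(std::stoi(s)); }
    operator int64_t() const { return std::stoll(s); }
};

ParseResult parse(const std::string& s) { return {s}; }

void demo(const std::string& input) {
    int16_t a = parse(input);  // expected type selects the int16_t path
    int64_t b = parse(input);  // expected type selects the int64_t path
}
```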

This introduces a very big complication. Rather than determining types by starting from the leaves and going up, we now have type propagation in both directions (this is somewhat inspired by ML’s type inference, but there it is an “all or nothing” affair, and so a simpler problem to solve).

One thing that would make this simpler is if the compiler is allowed to assume that, whichever overload it picks, the correct operation will be carried out. In other words, picking the right overload is a question of getting the types right, not a question of getting the correct operation to take place.
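
For contrast, the dangerous case would look something like this (a deliberately contrived pair, with hypothetical names):

```cpp
#include <cstdint>

// Two overloads that happen to share a name but perform different
// operations: the case I am hoping does not occur in practice.
int32_t scale(int32_t x) { return x * 2; }   // doubles its input
int64_t scale(int64_t x) { return x * 10; }  // multiplies by ten

void demo() {
    // Under expected-type back-propagation, the int64_t expectation
    // could widen the argument and select the second overload,
    // silently changing *what* the call does, not just its types.
    int64_t r = scale(3);
    (void)r;
}
```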

The question is: is this a valid assumption to make?

I should point out that, should the programmer choose to override that decision, the language provides fairly straightforward tools to do so. This merely affects the default deduction steps.