There is a straightforward but suboptimal approach that is fairly easy to implement and suitable when there are relatively few types. It maintains an $N \times N$ matrix of presumed subtyping relationships that are iteratively falsified. At the end, you have a lookup table recording, for every pair of types, whether one is a subtype of the other.
First, collect all the types that appear in the program: this means type declarations, anonymous types, object or record construction sites, and anywhere else that creates a new type. Some of them may be duplicates, but that's ok. Give each of them a number $1..N$.
Create a two-dimensional array of booleans with $N$ rows and $N$ columns. Each cell will represent whether the type of that row is a subtype of the type of that column.
To start with, fill the entire array with trues. We're going to begin by presuming that every type is compatible with every other type, and then repeatedly check each pair of types for reasons to falsify the presumption, until there are no mismatches left to find.
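As a minimal sketch in Python (with a hypothetical `types` list standing in for the collection step above), the starting state is just an $N \times N$ grid of `True`:

```python
# Hypothetical result of the collection step: one entry per type found
# in the program; the contents don't matter yet, only the count.
types = ["TypeA", "TypeB", "TypeC"]

n = len(types)
# subtype[row][col] == True means "the row's type is presumed to be a
# subtype of the column's type"; presume everything, falsify from there.
subtype = [[True] * n for _ in range(n)]
```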
For each cell that is currently true, call the column's type X and the row's type Y, and loop over every member of type X:
- Check whether Y has a member with the same name. If not, Y cannot be a subtype of X, so put a false in this cell and move on to the next cell.
- Check whether the number of parameters matches in both members (if applicable). If not, put a false in this cell, and move on to the next one.
- Call the return type (or field type) of X's member P and the return type of Y's member C. Look up the cell in the row of C and the column of P (return types are covariant): if it's false, we know that Y can't be a subtype either, so put a false in this cell and move on. If it's true, we still think this might be a subtype, so continue.
- Do the same for each parameter of the method, if applicable, but swap the row and column you look up: parameters are contravariant, so X's parameter type must be a (presumed) subtype of Y's.
If nothing has proved that Y is not a subtype of X, the cell will still be true. Do the same for every cell of the array, and then start again from the beginning. There will be more false cells this time and so fewer to look at, and some of the return/parameter types that were presumed compatible last time won't be any more, so more subtype relationships get falsified on each pass. As soon as you find out that a type can't be a subtype of another, you stop looking at that cell and never need to come back to it.
Repeat until you make it all the way through the entire grid without making any changes. At this point, every pair that has been proved not to be a subtype contains false, and the remaining true cells represent genuine subtyping relationships. This array can be used as a lookup table for subtyping wherever it's required in the compiler or interpreter, like in variable assignments.
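The whole loop can be sketched in Python. The representation here is invented for illustration: types are numbered from 0 rather than 1, and each type is a dict mapping member names to a `(parameter_type_indices, return_type_index)` pair:

```python
# A sketch under an assumed toy representation: types[i] is a dict
# mapping member names to a pair (parameter_type_indices, return_type_index).

def compute_subtyping(types):
    n = len(types)
    # Presume every type is a subtype of every other type.
    sub = [[True] * n for _ in range(n)]
    changed = True
    while changed:
        changed = False
        for col in range(n):          # candidate supertype X
            for row in range(n):      # candidate subtype Y
                # Once falsified, a cell is never revisited.
                if sub[row][col] and not check(types[col], types[row], sub):
                    sub[row][col] = False
                    changed = True
    return sub

def check(x, y, sub):
    """Can Y still be presumed a subtype of X, given the matrix so far?"""
    for name, (x_params, x_ret) in x.items():
        if name not in y:
            return False                      # missing member
        y_params, y_ret = y[name]
        if len(x_params) != len(y_params):
            return False                      # arity mismatch
        if not sub[y_ret][x_ret]:
            return False                      # return types: covariant lookup
        for x_p, y_p in zip(x_params, y_params):
            if not sub[x_p][y_p]:
                return False                  # parameters: contravariant lookup
    return True
```

For example, with `types = [{"id": ((), 0)}, {"id": ((), 1), "extra": ((), 1)}]`, the fixed point marks type 1 as a subtype of type 0 but not the reverse, because type 0 lacks the `extra` member.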
This algorithm is guaranteed to terminate and gives a sound subtyping relation, but it's doing a lot of work by examining every pair of types, and potentially every member of each, over and over again: each pass checks up to $N^2$ cells, and since a pass only continues if it falsified at least one cell, there can be up to $N^2$ passes in pathological cases. For very large numbers of types and very large types this can be quite expensive, but the result is cacheable, and adding a new type requires only adding one row and column. There are some obvious optimisation steps (e.g. run only the cheap member/arity checks in the first pass), but beyond that more complex approaches are needed for meaningful savings. For a moderate number of types, this approach is fine for many practical use cases.
Any pair of types with no reason to believe that there isn't a subtype relationship will be left marked as true, correctly. In the `A { id -> A }` and `B { id -> B }` case from the question, nothing will have falsified the relationship, so A and B will correctly be considered subtypes of each other (synonyms). If A had additional members, it would still be considered a subtype of B, but not the reverse, because the lookup for the extra member would fail.
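To check this concretely, here is a self-contained sketch of that case, using a toy representation made up for illustration (fields only, so each member name maps straight to its return type's index):

```python
# Toy representation (an assumption for illustration): each type maps a
# member name to the index of that member's return type.
A, B = 0, 1
types = [
    {"id": A},   # A { id -> A }
    {"id": B},   # B { id -> B }
]

n = len(types)
sub = [[True] * n for _ in range(n)]  # sub[row][col]: row <: col, presumed

changed = True
while changed:
    changed = False
    for col in range(n):              # candidate supertype X
        for row in range(n):          # candidate subtype Y
            if not sub[row][col]:
                continue
            x, y = types[col], types[row]
            for name, x_ret in x.items():
                # A missing member, or a return type already proved
                # incompatible (covariant lookup), falsifies the cell.
                if name not in y or not sub[y[name]][x_ret]:
                    sub[row][col] = False
                    changed = True
                    break

# Nothing falsifies anything here: A and B end up mutual subtypes.
print(sub[A][B], sub[B][A])  # True True
```

Giving A an extra member, say `{"id": A, "extra": A}` (hypothetical), would leave `sub[A][B]` true but falsify `sub[B][A]`, matching the point about additional members above.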