Using a relational database model would have avoided this issue. The pain of converting the data and adapting the application is the price to pay for the incorrect initial design.
That said, you can enforce the needed constraint on the existing arrangement using a materialised (indexed) view:
CREATE TABLE dbo.T
(
    Id integer NOT NULL,
    ImageId char(1) NOT NULL,
    ImageUrl varchar(900) NOT NULL
);
GO
CREATE OR ALTER VIEW dbo.ValidComboT
WITH SCHEMABINDING AS
SELECT
    ImageId,
    ImageUrl,
    NumRows = COUNT_BIG(*)
FROM dbo.T
GROUP BY
    ImageId,
    ImageUrl;
GO
CREATE UNIQUE CLUSTERED INDEX
[CUQ dbo.ValidComboT ImageId, ImageUrl]
ON dbo.ValidComboT
(ImageId, ImageUrl);
GO
CREATE UNIQUE NONCLUSTERED INDEX
[UQ dbo.ValidComboT ImageId]
ON dbo.ValidComboT
(ImageId);
GO
CREATE UNIQUE NONCLUSTERED INDEX
[UQ dbo.ValidComboT ImageUrl]
ON dbo.ValidComboT
(ImageUrl);
GO
Test:
-- Succeeds
INSERT dbo.T (Id, ImageId, ImageUrl)
VALUES (1, 'a', 'a.com');
INSERT dbo.T (Id, ImageId, ImageUrl)
VALUES (2, 'a', 'a.com');
INSERT dbo.T (Id, ImageId, ImageUrl)
VALUES (3, 'b', 'b.com');
GO
-- Reset
DELETE dbo.T;
-- Fails
INSERT dbo.T (Id, ImageId, ImageUrl)
VALUES (1, 'a', 'a.com');
INSERT dbo.T (Id, ImageId, ImageUrl)
VALUES (2, 'b', 'a.com');
GO
-- Reset
DELETE dbo.T;
-- Fails
INSERT dbo.T (Id, ImageId, ImageUrl)
VALUES (1, 'a', 'a.com');
INSERT dbo.T (Id, ImageId, ImageUrl)
VALUES (2, 'a', 'b.com');
Output:
(3 rows affected)
Msg 2601, Level 14, State 1
Cannot insert duplicate key row in object 'dbo.ValidComboT'
with unique index 'UQ dbo.ValidComboT ImageUrl'.
The duplicate key value is (a.com).
The statement has been terminated.
Msg 2601, Level 14, State 1
Cannot insert duplicate key row in object 'dbo.ValidComboT'
with unique index 'UQ dbo.ValidComboT ImageId'.
The duplicate key value is (a).
The statement has been terminated.
The clustered index key is limited to 900 bytes, so this only works if the combined size of ImageId and ImageUrl fits within that limit. Note that as declared, ImageId (1 byte) plus a maximum-length ImageUrl (up to 900 bytes) can reach 901 bytes, so a full-length URL would fail at insert time.
This solution adds a small cost to every data modification that affects the view. There is also a storage requirement for the materialised rows. On the other hand, you may find the information in the view helpful for other queries, and the database engine may be able to make use of it as well.
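For example, a query can read the materialised rows directly by referencing the view with the NOEXPAND hint (required for guaranteed index use on editions without automatic indexed view matching). A sketch:

```sql
-- Read the materialised rows and pair counts directly.
-- NOEXPAND ensures the view's indexes are used on any edition.
SELECT
    V.ImageId,
    V.ImageUrl,
    V.NumRows
FROM dbo.ValidComboT AS V WITH (NOEXPAND)
WHERE V.ImageId = 'a';
```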
db<>fiddle demo
Considerations
The database engine takes care of all the details needed to keep the materialised view synchronised with the base table.
If you code your own solution (e.g. using a trigger), you take responsibility for covering all the edge cases that can occur under high concurrency. This is not always trivial. Two sessions might check for a conflicting row at the same time, each conclude that it does not exist, then both insert on that basis, resulting in an undetected duplicate.
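To illustrate what such a hand-rolled solution might involve, here is a sketch of a trigger-based check (the trigger name and error number are invented for the example). The locking hints exist precisely to close the race described above; a production version would also restrict the check to pairings touched by the rows in `inserted` rather than scanning the whole table:

```sql
-- Sketch only: a trigger-based alternative to the indexed view
CREATE OR ALTER TRIGGER dbo.T_CheckCombo
ON dbo.T
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Serializable range locks guard against two sessions
    -- both passing the existence check at the same time
    IF EXISTS
    (
        SELECT 1
        FROM dbo.T AS T WITH (UPDLOCK, SERIALIZABLE)
        GROUP BY T.ImageId
        HAVING COUNT(DISTINCT T.ImageUrl) > 1
    )
    OR EXISTS
    (
        SELECT 1
        FROM dbo.T AS T WITH (UPDLOCK, SERIALIZABLE)
        GROUP BY T.ImageUrl
        HAVING COUNT(DISTINCT T.ImageId) > 1
    )
    BEGIN
        -- Unhandled error in a trigger rolls back the statement
        THROW 50000, N'Conflicting ImageId/ImageUrl pairing.', 1;
    END;
END;
```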
On the other hand, the extra safety built into view maintenance comes at a cost to potential concurrency. If many rows in the base table are summarised in a single view row, additional blocking can occur due to this 'lock concentration'. Deadlocks are also a possibility, though correct indexing can mitigate this.
Maintaining an indexed view is typically slightly faster than a trigger because the plan operators necessary to maintain the view are inserted into the plan for the data-changing statement. A trigger requires row versioning, and its logic is executed in a separate SQL context. The cost is similar to that of executing additional SQL statements.
The materialised view validates all data in the base table when it is created. Triggers only apply to rows added or modified after the triggers are created, and while they remain enabled. Bulk inserts to the table will skip firing triggers by default.
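If a trigger-based approach were chosen, bulk loads would need to opt in to trigger execution explicitly, for example (the file path here is illustrative):

```sql
-- Triggers fire during a bulk load only when requested explicitly
BULK INSERT dbo.T
FROM 'C:\Data\T.csv'  -- illustrative path
WITH (FIRE_TRIGGERS);
```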
These are all generalisations, and not a complete list of the considerations. You need to test the various approaches in your environment. Or transition to a good design where none of this messing around is required.