I'm designing a table that could hold hundreds of thousands of rows, for example:
CREATE TABLE [dbo].[Messages](
[MessageID] [uniqueidentifier] NOT NULL,
[Number] [int] NOT NULL,
[CreationDate] [datetime] NOT NULL,
[Status] [tinyint] NOT NULL,
[Message] [nvarchar](1022) NOT NULL,
[Topic] [smallint] NOT NULL,
[DestinataryID] [int] NULL
)
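For context, the supporting indexes I have in mind look roughly like the sketch below. The index names and column choices are hypothetical and purely illustrative; the right set depends on the actual workload:

```sql
-- Hypothetical supporting indexes (names and INCLUDE list are illustrative only).
CREATE NONCLUSTERED INDEX IX_Messages_DestinataryID
    ON dbo.Messages (DestinataryID)
    INCLUDE (Number, CreationDate, Status, Topic);

CREATE NONCLUSTERED INDEX IX_Messages_CreationDate
    ON dbo.Messages (CreationDate);
```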
Almost any of those columns can be filtered by users at application runtime, DestinataryID being the most frequent. I did create indexes, just not on every column. My task now is to design a stored procedure to optimize those queries. I came up with something like this:
CREATE PROCEDURE GetFilteredData
@Topic INT = NULL,
@DestinataryID INT = NULL,
@CreationDateStart DATETIME = NULL,
@CreationDateEnd DATETIME = NULL,
@Status TINYINT = NULL -- matches the tinyint column; BIT would truncate values > 1
AS
BEGIN
DECLARE @SQL NVARCHAR(MAX);
SET @SQL = N'SELECT MessageID, Number, CreationDate, Status, Message, Topic, DestinataryID FROM dbo.Messages WHERE 1=1';
IF @DestinataryID IS NOT NULL
SET @SQL += ' AND DestinataryID = @DestinataryID';
IF @Topic IS NOT NULL
SET @SQL += ' AND Topic = @Topic';
-- more ifs
-- declare and pass every optional parameter; unused ones are simply ignored
EXEC sp_executesql @SQL, N'@Topic INT, @DestinataryID INT', @Topic, @DestinataryID;
END
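For illustration, here is a hypothetical call that filters only by DestinataryID. Each distinct combination of IF branches builds a different @SQL string, so each combination gets its own cached plan, while the literal values stay parameterized via sp_executesql:

```sql
-- Hypothetical call: only DestinataryID is supplied,
-- so @SQL contains just that one predicate.
EXEC GetFilteredData @DestinataryID = 42;
-- Effectively runs:
-- SELECT ... FROM dbo.Messages WHERE 1=1 AND DestinataryID = @DestinataryID
```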
My main concern is that it could start using bad execution plans: a cached plan could be good for some parameter combinations and awful for others when multiple filters are combined.
Is this a standard way of doing these dynamic queries? Should I design a separate stored procedure for each filter combination? Would query hints help? Thanks in advance.