T-SQL - performing a join on a delimited column - performance and optimisation issue
I have the following (slightly simplified in the columns returned) query:
    select products.product, products.id, products.customers
    from products
    where products.ordercompletedate is null
This returns, for example:
    producta    1    bob
    producta    1    jane
    productb    2    john,dave
Note that customers can be a comma-delimited list. I want to add a column, 'customer locations', so that the above becomes:
    producta    1    bob          ireland
    producta    1    jane         wales
    productb    2    john,dave    scotland,england
I created the function below; fn_split returns a single row per delimited item.
    create function [dbo].[getlocations] (@customernames varchar(256))
    returns @templocations table (customerlocations varchar(256))
    begin
        declare @namestr varchar(256)
        declare @temp table (singleloc varchar(256))

        -- one location row per customer name in the delimited input
        insert @temp
        select customerlocation.location
        from customerlocation
        inner join customers on customers.id = customerlocation.id
        inner join dbo.fn_split(@customernames, ',') split on split.item = customers.name

        -- concatenate the locations back into a single comma-delimited string
        select @namestr = coalesce(@namestr + ',', '') + singleloc
        from @temp

        insert @templocations values (@namestr)

        return
    end
and applied it to the original query as follows:
    select products.product, products.id, products.customers, locations.customerlocations
    from products
    outer apply dbo.getlocations(products.customers) locations
    where products.ordercompletedate is null
However, this is extremely slow, with the query taking ~10 seconds on a table of a mere 2000 rows (the initial query runs instantly). That suggests the query cannot be optimised and is being evaluated row by row. I stayed away from scalar-valued functions for that reason and tried to stick to table-valued functions. Is there some glaring fault in my logic/code?
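For comparison, here is a minimal sketch of the same lookup written as an inline (single-statement) table-valued function, which the optimizer can expand into the outer query rather than running an opaque multi-statement function once per row. It assumes dbo.fn_split, customers and customerlocation exist exactly as described above; the function name and the STUFF/FOR XML PATH concatenation are illustrative.

    -- Sketch only: inline table-valued function; assumes dbo.fn_split returns
    -- one row per delimited item in a column named "item", as in the question.
    create function dbo.getlocations_inline (@customernames varchar(256))
    returns table
    as
    return
    (
        select stuff((select ',' + cl.location
                      from dbo.customerlocation cl
                      inner join dbo.customers c on c.id = cl.id
                      inner join dbo.fn_split(@customernames, ',') split on split.item = c.name
                      for xml path('')), 1, 1, '') as customerlocations
    );

It would be applied in the same way, e.g. outer apply dbo.getlocations_inline(products.customers) locations.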
I'd suggest creating a view, based on the unnormalized table, that does the normalization, and using that as the basis for future queries. Unfortunately I can't identify the PK of your current products table from the question, but you'd create the view using SCHEMABINDING and would then be able to turn it into an indexed view (indexed on the PK + customer name).

Querying the view (on Enterprise Edition, or with the NOEXPAND option) should give comparable performance to what you'd see if the normalized table actually existed.
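As a rough sketch of what such a normalizing view might look like, reusing the question's dbo.fn_split and table names (the view name is illustrative):

    -- Sketch only: one row per (product, customer), with the location resolved.
    create view dbo.vw_productcustomerlocations
    as
    select p.id,
           p.product,
           split.item as customer,
           cl.location
    from dbo.products p
    cross apply dbo.fn_split(p.customers, ',') split
    inner join dbo.customers c on c.name = split.item
    inner join dbo.customerlocation cl on cl.id = c.id;

Note that to actually index the view, the split would have to be expressed without the table-valued function (for instance with an inline numbers-table split), since indexed views disallow TVF references and APPLY; once indexed, Enterprise Edition uses the index automatically and other editions need the WITH (NOEXPAND) hint.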