SQL percentage column

Hi,
I'm trying to set up a table that will allow my users to apply weightings to a value to calculate how much an item might cost. My table is currently set up as:
ID int, WeightTableId int, GroupAId int, GroupBId int, Weight numeric(20,10), Measure numeric(20,10), PercentageSplit numeric(20,10)
So in the table the user would add lines like:

TableHeaderId  GroupAId  GroupBId  Weight  Measure
1              A         A         1       8153
1              A         B         1       94491

If I do this in Excel, it's 8153 + 94491 to get the total weight, then:

A,A split = 8153/102644    SQL: 0.0794290000   Excel: 0.079429874
A,B split = 94491/102644   SQL: 0.9205700000   Excel: 0.920570123
The values in Excel add up to 1 (100%), whereas the SQL values add up to 0.999999. This causes my amounts to lose 0.000001, which affects my total. Is there a way I can get the calculation to give me a more accurate value?
This is my current query:

UPDATE TableDetail
SET ProcessingSplit = (TableDetail.Weight * TableDetail.Measure) / TotalWeight.TotalWeight
FROM TableDetail
INNER JOIN (
    SELECT TableHeaderId, SUM(Weight * Measure) AS TotalWeight
    FROM TableDetail AS TableDetail_1
    GROUP BY TableHeaderId
) AS TotalWeight
    ON TableDetail.TableHeaderId = TotalWeight.TableHeaderId
WHERE (TotalWeight.TotalWeight <> 0)
 Edited by DavMol2k Tuesday, July 16, 2013 4:01 PM
Answers

Without going into "who is wrong", I observe an unnecessary problem in your design: the numeric columns are oversized. I don't know enough about "Measure" to say with any certainty that you don't need to store values up to 9 billion with up to 10 decimals, but for "Weight" and "PercentageSplit" I am quite certain that numeric(20,10) is way too big.
The disadvantage of using oversized decimals is that you lose precision when you multiply or divide, as demonstrated in the code below.
SET NOCOUNT ON

CREATE TABLE #oversized (
     weight           decimal(20,10)
    ,measure          decimal(20,10)
    ,percentage_split decimal(20,10)
)
INSERT INTO #oversized VALUES (1, 8153, NULL)
INSERT INTO #oversized VALUES (1, 94491, NULL)

UPDATE #oversized
SET percentage_split = (weight * measure) / (SELECT SUM(weight * measure) FROM #oversized)

SELECT * FROM #oversized
DROP TABLE #oversized

CREATE TABLE #properlysized (
     weight           decimal(6,3)
    ,measure          decimal(20,10)
    ,percentage_split decimal(11,10)
)
INSERT INTO #properlysized VALUES (1, 8153, NULL)
INSERT INTO #properlysized VALUES (1, 94491, NULL)

UPDATE #properlysized
SET percentage_split = (weight * measure) / (SELECT SUM(weight * measure) FROM #properlysized)

SELECT * FROM #properlysized
DROP TABLE #properlysized

SET NOCOUNT OFF
results
weight        measure           percentage_split
1.0000000000  8153.0000000000   0.0794290000
1.0000000000  94491.0000000000  0.9205700000

weight  measure           percentage_split
1.000   8153.0000000000   0.0794298741
1.000   94491.0000000000  0.9205701259
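The same digit loss can be reproduced outside SQL Server; here is a minimal Python sketch using the stdlib decimal module (truncating at 6 decimals mimics the scale visible in the #oversized output above; treating it as truncation rather than rounding is an inference from those digits, not SQL Server's documented rule):

```python
from decimal import Decimal, ROUND_DOWN

total = Decimal(8153) + Decimal(94491)        # 102644

share_a = Decimal(8153) / total               # 0.0794298741... at full precision
share_b = Decimal(94491) / total              # 0.9205701259...

# Cutting back to 6 meaningful decimals, as the oversized columns
# effectively do, loses the digits that made the shares sum to 1.
cut = Decimal("0.000001")
trunc_a = share_a.quantize(cut, rounding=ROUND_DOWN)   # 0.079429
trunc_b = share_b.quantize(cut, rounding=ROUND_DOWN)   # 0.920570

print(trunc_a + trunc_b)                      # 0.999999
```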
GertJan
 Marked as answer by DavMol2k Sunday, July 28, 2013 2:39 AM
All replies

Technically, SQL Server is correct and Excel is wrong. This is the difference between fixed-point decimal math and floating-point math: Excel uses floating point, which stores an approximation of each number.
With the significant digits available, the correct value is 0.999999.
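The fixed- versus floating-point distinction is easy to see outside SQL Server; a quick Python illustration (the stdlib decimal module simply exposes the exact value a binary float actually stores):

```python
from decimal import Decimal

# A binary float cannot represent 0.1 exactly; Decimal(float) reveals
# the double-precision approximation that is really stored.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Fixed-point decimal stores the stated digits exactly.
print(Decimal("0.1"))  # 0.1
```

Floating point trades exact decimal digits for range and speed, which is why a displayed value like 0.079429874 is itself a rounded view of an approximation.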

Try this in your subquery:

CONVERT(INT, SUM(Weight * [Measure]))
kevine323
Adventures in Database Administration 
You can force SQL Server to return the same results as Excel by casting the numbers to float.
UPDATE TableDetail
SET ProcessingSplit = (TableDetail.Weight * TableDetail.Measure) / TotalWeight.TotalWeight
FROM TableDetail
INNER JOIN (
    SELECT TableHeaderId, CAST(SUM(Weight * Measure) AS float) AS TotalWeight
    FROM TableDetail AS TableDetail_1
    GROUP BY TableHeaderId
) AS TotalWeight
    ON TableDetail.TableHeaderId = TotalWeight.TableHeaderId
WHERE (TotalWeight.TotalWeight <> 0)
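For what it's worth, the float route can be sanity-checked outside SQL Server with plain double-precision arithmetic; a small Python sketch (the reference digits are the ones quoted earlier in the thread):

```python
total = 8153 + 94491          # 102644

# IEEE-754 doubles carry roughly 15-16 significant digits, ample here.
split_a = 8153 / total        # ~0.0794298741
split_b = 94491 / total      # ~0.9205701259

# Matches the Excel/float figures from the thread to well under 1e-9.
assert abs(split_a - 0.0794298741) < 1e-9
assert abs(split_b - 0.9205701259) < 1e-9
```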


You can do the calculation using the float datatype. It will help here, but maybe not always: SQL Server and Excel have different rules for calculations and don't always produce the same result. That said:
UPDATE #FooNumeric
SET PercentageSplit = (CAST(#FooNumeric.Weight AS float) * #FooNumeric.Measure) / TotalWeight.TotalWeight
FROM #FooNumeric
INNER JOIN (
    SELECT WeightTableId, SUM(CAST(Weight AS float) * Measure) AS TotalWeight
    FROM #FooNumeric AS TableDetail_1
    GROUP BY WeightTableId
) AS TotalWeight
    ON #FooNumeric.WeightTableId = TotalWeight.WeightTableId
WHERE (TotalWeight.TotalWeight <> 0)
Tom 
MERGE TableDetail AS Target
USING (
    SELECT WeightTableId, SUM(CAST(Weight AS numeric(20,10)) * Measure) AS TotalWeight
    FROM TableDetail AS d
    GROUP BY WeightTableId
) AS Source
    ON Target.WeightTableId = Source.WeightTableId
WHEN MATCHED AND (Source.TotalWeight <> 0) THEN
    UPDATE SET Target.PercentageSplit = (CAST(Target.Weight AS numeric(20,10)) * Target.Measure) / Source.TotalWeight;
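Worth noting: none of the casts above guarantees that the stored splits sum to exactly 1. If that invariant matters more than the individual digits, a largest-remainder pass can enforce it after the division; here is a hedged Python sketch with the stdlib decimal module (the function name and the 6-decimal choice are illustrative, not from the thread):

```python
from decimal import Decimal, ROUND_DOWN

def split_percentages(weights, places=6):
    """Largest-remainder rounding: truncate every share to `places`
    decimals, then hand the leftover smallest units to the rows whose
    truncated-away remainders are largest, so the result sums to 1."""
    q = Decimal(1).scaleb(-places)                 # 0.000001 for places=6
    total = Decimal(sum(weights))
    exact = [Decimal(w) / total for w in weights]
    parts = [e.quantize(q, rounding=ROUND_DOWN) for e in exact]
    leftover = int((Decimal(1) - sum(parts)) / q)  # whole units still to assign
    by_remainder = sorted(range(len(weights)),
                          key=lambda i: exact[i] - parts[i], reverse=True)
    for i in by_remainder[:leftover]:
        parts[i] += q
    return parts

print(split_percentages([8153, 94491]))
# [Decimal('0.079430'), Decimal('0.920570')]
```

Applied to the thread's numbers it returns 0.079430 and 0.920570, which sum to exactly 1.000000; the same distribution step could be done in T-SQL, but it is easier to see the idea in a few lines of Python.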
