Are you sure that every feature has a value for both "2016 Assessed Value" and "2015 Assessed Value"?
I just tried the following and it worked as expected:
Thanks for the reply David. No, not all features have a value, but I thought the @sub function was supposed to handle empty features? And for the few that did "pass" the test, why is the result greater than 1000? Thanks again!
I can't tell without seeing the data. Can you post a minimal workspace with data to reproduce the issue here?
The @sub function returns <null> if an input value is not numeric, and the Tester treats <null> as less than any number with the <, <=, >=, and > operators. See also the description of the operators in the Tester help.
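Roughly speaking, the behaviour takashi describes could be pictured like this (a plain-Python analogue for illustration only, not FME's actual implementation; the sample values are made up):

```python
# Illustrative analogue of the described behaviour (an assumption, not FME code):
# @sub yields a null when an operand is not numeric, and the Tester's numeric
# comparisons treat that null as smaller than any number.

def sub(a, b):
    """Mimic @sub: return None when either operand is not numeric."""
    try:
        return float(a) - float(b)
    except (TypeError, ValueError):
        return None

def tester_greater_than(value, threshold):
    """Mimic the Tester's '>' operator: null compares less than any number."""
    if value is None:
        return False  # null < threshold, so 'value > threshold' fails
    return value > threshold

print(tester_greater_than(sub("", "191400"), 1))        # False: empty input -> null
print(tester_greater_than(sub("200500", "191400"), 1))  # True: 9100.0 > 1
```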
Here is a workspace and some data. I'm trying to narrow the list down to the features where the 2016 assessment minus the 2015 assessment is greater than 1 and less than 1000. I did the calculation in Excel and have a column showing that only 3 features should pass the test. Instead I get 18, and none of them are the 3 I would expect. Let me know your thoughts.
Thanks!
example-data.xlsx
sub-example.fmw
The original values of 2016/2015 Assessed Value are strings containing a thousands separator (comma), but FME seems to interpret them as numbers with a decimal point, e.g. "191,400" is treated as "191.4". The behavior may be reasonable, since the comma is used as the decimal point in some regions of the world, but I think it should be documented somewhere in the help.
Try removing the commas from the original values before performing math operations.
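For example, a quick sketch of the cleanup takashi suggests, written as plain Python for clarity (in the workspace a StringReplacer removing "," before the Tester would do the same job; the attribute values below are hypothetical):

```python
# Minimal sketch of the suggested fix: strip thousands separators before
# the arithmetic, so "191,400" becomes 191400.0 instead of being read as 191.4.

def to_number(value):
    """Remove commas and convert an assessed-value string to a float."""
    return float(str(value).replace(",", ""))

v2016 = to_number("200,500")   # hypothetical "2016 Assessed Value"
v2015 = to_number("191,400")   # hypothetical "2015 Assessed Value"

diff = v2016 - v2015           # 9100.0, the value the Tester should compare
print(1 < diff < 1000)         # False for this pair; only the expected features pass
```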
That was the answer, thanks takashi!