If you are worried about duplicates you can use the DuplicateFilter; the tricky part will be deciding which of the duplicates you need to keep.
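For illustration, here is a minimal Python sketch of the kind of key-based deduplication the DuplicateFilter performs. The attribute names and the keep-the-first-occurrence policy are assumptions for the example; you might instead keep the newest or most complete row.

```python
# Sketch of key-based deduplication, analogous to what FME's
# DuplicateFilter does. Attribute names are hypothetical; keeping
# the first feature seen per key is just one possible policy.
def drop_duplicates(records, key_fields=("emp_id", "course_date", "course_name")):
    seen = set()
    unique = []
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```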
What kind of errors are you getting? Could you post a logfile snippet with the errors?
Here is the error section of the log.
Unexpected Input Remover Nuker (TeeFactory): Cloned 0 input feature(s) into 0 output feature(s)
dbo.lps_hr_crseunit_staging (MSSQL_ADO_1) Splitter (TeeFactory): Cloned 15 input feature(s) into 15 output feature(s)
dbo.lps_hr_crseunit_staging_MSSQL_ADO_1 Feature Counter -1 4 (TeeFactory): Cloned 15 input feature(s) into 15 output feature(s)
Destination Feature Type Routing Correlator (RoutingFactory): Tested 15 input feature(s), wrote 15 output feature(s): 0 matched merge filters, 15 were routed to output, 0 could not be routed.
Final Output Nuker (TeeFactory): Cloned 0 input feature(s) into 0 output feature(s)
Microsoft SQL Server Non-Spatial Writer: Failed to commit database transaction. Provider error `(2627) Violation of UNIQUE KEY constraint 'UK_lps_hr_crseunit'. Cannot insert duplicate key in object 'dbo.hr_crseunit'. The duplicate key value is (104099 , Oct 7 2019 12:00AM, FME_PM_UPLD2, 2019100500, Colorado Language Arts Society, , 0.53).'
Microsoft SQL Server Non-Spatial Writer: Closing `lps_bp' . Write operation complete
A fatal error has occurred. Check the logfile above for details
... Last line repeated 2 times ...
Translation FAILED with 2 error(s) and 1 warning(s) (15 feature(s) output)
FME Session Duration: 3.0 seconds. (CPU: 0.3s user, 0.2s system)
END - ProcessID: 4868, peak process memory usage: 137692 kB, current process memory usage: 44708 kB
A fatal error has occurred. Check the logfile above for details
Program Terminating
Translation FAILED.
I do have a duplicate filter on the initial file load to filter out dupes based on unique record criteria. However, the table already has rows in it, and that is where I want to make sure I don't load dupes. I put constraints on the table to ensure no dupes, but I was hoping that somehow in the fmw I could have it just ignore the dupe errors and load only the records that are not duplicates.
FME can do a lot, but not what you are trying to do, at least not this way.
I would read both the file and the table and join the features, essentially removing from the file any records that already exist in the table, so that you are left with only the unique features that are not yet present in the table.
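A rough sketch of that anti-join outside FME, assuming a pyodbc connection; the connection string and key column names are hypothetical placeholders. Inside FME you would typically do the same thing with a FeatureMerger (keeping the UnmergedRequestor output) or a DatabaseJoiner.

```python
import pyodbc

def filter_new_records(incoming_records, conn_str="DSN=lps_bp"):
    """Keep only records whose unique key is not already in dbo.hr_crseunit.

    The connection string and column names are placeholders; adjust
    them to match the real unique key on the target table.
    """
    conn = pyodbc.connect(conn_str)
    cur = conn.cursor()
    cur.execute("SELECT emp_id, course_date, course_name FROM dbo.hr_crseunit")
    existing = {tuple(row) for row in cur.fetchall()}
    return [
        rec for rec in incoming_records
        if (rec["emp_id"], rec["course_date"], rec["course_name"]) not in existing
    ]
```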
@tgolden The translation fails despite the Rejected Feature Handling: Continue Translation (found under Workspace Parameters->Translation) because Rejected Feature Handling only applies to the <rejected> port on transformers, not to writers.
The database will fail the entire translation when Bulk Insert: Yes is set. Try setting Bulk Insert: No on your SQL Server writer. This will be a bit more forgiving, although the load will be a little slower.
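To see why row-at-a-time loading is more forgiving, here is a hedged Python sketch with pyodbc: a duplicate-key violation (SQL Server error 2627, as in the log) then costs only the offending row instead of rolling back the whole bulk transaction. The connection string and column list are assumptions, not FME's actual writer internals.

```python
import pyodbc

# Sketch of why per-row inserts are more forgiving than a bulk load:
# a UNIQUE KEY violation is caught per row, so the rest still commits.
# Connection string and column names are hypothetical placeholders.
def insert_skipping_duplicates(rows, conn_str="DSN=lps_bp"):
    conn = pyodbc.connect(conn_str)
    cur = conn.cursor()
    skipped = 0
    for row in rows:
        try:
            cur.execute(
                "INSERT INTO dbo.hr_crseunit (emp_id, course_date, course_name) "
                "VALUES (?, ?, ?)",
                row,
            )
            conn.commit()      # commit per row so one failure is isolated
        except pyodbc.IntegrityError:
            conn.rollback()    # skip the duplicate and carry on
            skipped += 1
    return skipped
```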