
Hi Team,

 

 

I am experiencing some strange behavior from the FMEServerJobSubmitter. I am running a batch of jobs from a master script that triggers a child script hosted on FME Server, using the FMEServerJobSubmitter.

 

The FMEServerJobSubmitter skips some batches in between: it submits two or three jobs, then rejects the fourth. The feature comes out of the <Rejected> port, and FME reports the reason as '' (NULL). But when I send the same list/batch again, it runs without any problem, so it is clearly not a data error. I suspect FME Server is sometimes not responding to the request, perhaps because it is busy running other jobs. There is NO problem with the WorkspaceRunner: it runs all of the batches continuously without any error.

The error message is:

FMEServerJobSubmitter(ServerFactory): http://gisfme-po-a0p:8080 - Failed to submit request to run workspace 'STORM_EGIS_FIBER_HUBTRACE_CHILD.fmw' in repository 'ETL_STORM_EGIS'

FMEServerJobSubmitter(ServerFactory): Reason - ''

 

and the error attributes say:

Attribute(encoded: UTF-8) : `fme_rejection_code' has value `ERROR_OTHER_REASON'

Attribute(encoded: UTF-8) : `fme_rejection_message' has value `Reason - '''

 

 

 

 

The below feature caused the translation to be terminated

Storing feature(s) to FME feature store file `C:\\Users\\chq_coe_pixel1\\Documents\\FME\\Workspaces\\STORM_EGIS_FIBER_HUBTRACE_MASTER_log.ffs'

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Feature Type: `FMEServerJobSubmitter_<REJECTED>_1_wDFJq86e3aI='

Attribute(encoded: UTF-8) : `CATEGORY' has value `NODE'

Attribute(encoded: UTF-8) : `GRP_NUM' has value `100'

Attribute(encoded: UTF-8) : `HUBTRACE_SN' has value `HubTraceGeom'

Attribute(encoded: UTF-8) : `HUBTRACE_URL' has value `https://gisstrm-po-c12p.cable.comcast.com/westdivSTORM/spatialSUITE/v8_2_1/rest/HubTraceGeom'

Attribute(encoded: UTF-8) : `MYLIST' has value `'WDDS38124815','WDDS38126098','WDDS38114806','WDDS38114826','TCBC23952049','WDDS39414848','WDDS38960825','WDDS38960855','WDDS38960845','WDDS38960835','WDDS38960865','WDDS38713115','WDDS38920639','WDDS39278507','WDDS39540493','TCCBTCDS24461485','WDDS42039722','WDDS38936435','WDDS38075428','WDDS33137276','WDDS39728327','WDDS39842262','WDDS40030077','WDDS38920649','WDDS39728317','WDDS39728337','WDDS39728307','WDDS39728297','WDDS38920609','WDDS36552018','WDDS39333894','WDDS39333914','WDDS39333904','WDDS38068406','WDDS38068386','WDDS38068376','WDDS38068366','WDDS38068356','WDDS38068346','WDDS38068396','WDDS42515162','WDDS39414828','WDDS39414858','WDDS39414888','WDDS39414878','WDDS38381294','WDDS38114776','WDDS38126399','WDDS38114786','WDDS38114796''

Attribute(encoded: UTF-8) : `NODE_URL' has value `https://gisstrm-po-c12p.cable.comcast.com/westdivSTORM/spatialSUITE/v8_2_1/rest/ListNodeFibersV2'

Attribute(encoded: UTF-8) : `REGIONS' has value `Twin Cities'

Attribute(encoded: UTF-8) : `SITE_ID' has value `WDDS38124815'

Attribute(encoded: UTF-8) : `SITE_URL' has value `https://gisstrm-po-c12p.cable.comcast.com/westdivSTORM/spatialSUITE/v8_2_1/rest/ListSplSiteFibersV2'

Attribute(encoded: UTF-8) : `SOURCE_CONNECTION' has value `TWINCITIES_DS_DR'

Attribute(encoded: UTF-8) : `SOURCE_SCHEMA' has value `TWINCITIES_DS'

Attribute(encoded: UTF-8) : `_SOURCE_SCHEMA' has value `TWINCITIES_DS'

Attribute(32 bit unsigned integer): `_creation_instance' has value `0'

Attribute(string) : `fme_feature_type' has value `SQLExecutor_4'

Attribute(string) : `fme_geometry' has value `fme_aggregate'

Attribute(encoded: UTF-8) : `fme_rejection_code' has value `ERROR_OTHER_REASON'

Attribute(encoded: UTF-8) : `fme_rejection_message' has value `Reason - '''

Attribute(entangled: string) : `fme_type' has value `fme_no_geom'

[entangled to `oracle_type']

Attribute(string) : `oracle_type' has value `oracle_nil'

Coordinate System: `'

Geometry Type: IFMEAggregate

Front Appearance Reference: `<inherited_or_default_appearance>'

Back Appearance Reference: `<inherited_or_default_appearance>'

Number of Geometries: 50

Hi @fkemminje

 

 

When you're noticing jobs failing, are you able to look at the FME Server log and the engine process monitor log file to see if they contain any more information about why some of the requests might be failing?

 

If nothing conclusive shows up in there, it may be worth increasing the logging level and checking the logs again once another workspace fails to submit jobs properly.


There are no logs for the failed jobs; the master script's log just says "Reason NULL". And the surprising part is that when I re-process the same set, it runs, and then a different set fails in between.

 

