Shape the future of FME with your ideas
Open ideas have been reviewed by our Product Management and are open for commenting and voting.
It would be great if the connector could be used to retrieve file properties such as Version, Description and Status (not just the output attributes you get when you List). We would like to use this to populate features we are extracting from ProjectWise, so a user can quickly and easily see what status and/or version a file is at without having to refer back to ProjectWise. Thank you, Dominique
***Note from Migration:*** Original title was: "Add support for environments in the ProjectWiseWSGConnector to grab custom extended attributes"

As a ProjectWise user, I'd love to be able to get environments from ProjectWise, such as custom attribute data or other metadata like descriptions, so that I can read all the information into FME. Right now I get a common set of attributes, but it would be great to read it all!

What kinds of environments are you looking to read in from Bentley ProjectWise?
On the Run Workspace page in FME Server there is an optional parameter, 'Email results to', which can be used to send out workspace results if the corresponding Subscription has been configured. When I create a Server App, I would like this parameter to be available as an optional 'Show in App' item alongside the workspace's published parameters.
It would be nice to have an Email user parameter type that validates the email address provided by the user. It would be particularly handy together with an Emailer within an FME Server App, for sending notifications to users after a job is run (job finished, download link, report, errors, etc.).
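Until such a parameter type exists, a rough workaround is to validate the address inside the workspace (e.g. in a PythonCaller) before the Emailer runs. A minimal sketch, using an illustrative, deliberately not RFC-complete pattern:

```python
import re

# Pragmatic email pattern: local part, "@", domain with at least one dot
# and a 2+ letter TLD. Hypothetical helper, not an FME-provided function.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(value: str) -> bool:
    """Return True if value looks like a plausible email address."""
    return bool(EMAIL_RE.match(value.strip()))
```

A feature whose email attribute fails this check could be routed away from the Emailer and reported back to the app user instead.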
With a lot of relations between small jobs using Automations, it would be nice to be able to filter on a time range to understand the impact connected/related jobs have on each other (especially in case of failure). Ref: asked by @ffro in "Job Status: sort and filter jobs". PS: Amazing work lately on FME Server, keep it up!
Add the functionality of the Sorter transformer right into the AttributeManager, so that the data can be sorted while setting up the final table structure. The AttributeManager already performs functions covered by other transformers such as the AttributeRenamer, AttributeRemover and ExpressionEvaluator, so why not include the Sorter in this list?
For security in FME Server, it would be nice to include job queues as items that can be associated with roles. For those of us running distributed engines, this would allow us to control which jobs can go to which engines by FME Server role, and we could set up engines with different service accounts so that they would have different permissions on things such as the enterprise file system.
I'd like the ability to enable a specific user to run a job in a specific queue. Currently I can only select a queue by giving the Engines & Licensing manage permission to a role. A workaround is to set a parameter and call a JobSubmitter. It would be nice to add a permission that enables the queue choice for the user.
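As a sketch of the JobSubmitter-style workaround, a job can be routed to a queue at submission time through the REST API's TMDirectives. The endpoint path and the "tag" directive name here reflect the FME Server REST API v3 as I understand it, so verify them against your server's API documentation:

```python
import json
import urllib.request

def submit_to_queue(host, token, repository, workspace, queue, params=None):
    """Build a job-submission request that routes the run to a named queue
    via the TMDirectives 'tag' field. Returns the prepared Request object;
    the caller executes it with urllib.request.urlopen(req)."""
    body = {
        "TMDirectives": {"tag": queue},  # queue routing tag (verify name)
        "publishedParameters": [
            {"name": k, "value": v} for k, v in (params or {}).items()
        ],
    }
    return urllib.request.Request(
        f"https://{host}/fmerest/v3/transformations/submit/{repository}/{workspace}",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"fmetoken token={token}",
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
        method="POST",
    )
```

A wrapper service like this can then be exposed only to the users who should reach that queue, approximating the requested permission.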
It would be great if we could retrieve the remaining Dynamic Engines credits metric from the FME API. We can already be somewhat informed by the "Low Credits" system event, but it would be great if we could poll this metric periodically from our monitoring system in order to get better insight into its timeline. Hopefully this idea is sufficient to be posted :)
On certain occasions there can be a very large spike in submissions of jobs into a given FME Server queue. What would be really nice would be a way to manage/move this glut of submitted jobs to a new queue, freeing up the original queue to process new jobs. While there are a lot of creative ways in which queues and priorities can be assigned, they can't cover all bases, and priority can change based on current server load.

Example scenario:
- FME Server has several engines.
- Queue X processes jobs from Repo A and splits them between Engines 1 & 2 in order to handle multiple requests.
- A very large spike of jobs comes into Queue X (usually submitted by the same user using some automated process).
- Queue X is now blocked up, potentially for hours or even days.
- New requests from other users are blocked until the backlog has passed.

Being able to move this glut of jobs to a low-priority queue POST submission would let new jobs get processed while also letting the backlog work its way through. Ways around this currently include the following (and I'm sure more), but each has its own drawbacks:
- Design for autoscaling to handle these spikes (really great, but complex and often not needed or even possible in the current IT infrastructure).
- Assign more engines to the queue until the backlog is processed (will potentially overwhelm the server and block up more engines).
- Design smarter job routing to try to detect larger jobs and give them lower priority or put them into their own queue (probably the best option, but not always possible).
- Temporarily increase the priority of newly submitted jobs using job directives.

Depending on licensing, this problem can also become expensive: if using CPU-based pricing for engines, you might want to move these slower jobs to a standard engine. The way I see it working would be an option in the UI when looking at the list of queued jobs: check one or more jobs > Options > Move Queue | Set New Priority.
Hello all, as I've been using Automations to break up more and more processes, the amount of job logging has increased tremendously. This is of course fine, but it pointed me to an omission in the job-log filtering options: I need to filter on a date/time range, e.g. to see which jobs of an Automation also failed yesterday, or to nail down the job that started 5 minutes ago. Scrolling through the list and visually checking the Started and/or Finished columns is quite tedious when, even with all filters in place, the list spans 6 pages. Kind regards, Martin
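Until a date/time range filter exists in the UI, one stopgap is to pull the job list from the REST API and filter it client-side. A minimal sketch; the "timeStarted" field name and the ISO timestamp format are assumptions to check against your server's actual JSON:

```python
from datetime import datetime

def jobs_in_range(jobs, start, end, field="timeStarted"):
    """Filter job records (e.g. the JSON list returned by the jobs REST
    endpoint) to those whose start timestamp falls in [start, end]."""
    selected = []
    for job in jobs:
        ts = datetime.fromisoformat(job[field])  # assumed ISO-8601 string
        if start <= ts <= end:
            selected.append(job)
    return selected
```

The same predicate works for the finished-time column by passing a different field name.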
It would be great to be able to create subfolders in repositories. That would make it easier to organize many workspaces in one repository.
It would be great if you could create sub-repositories, like subfolders, within a repository. I find I end up with so much within one repository and wish I could organise it better. I could create a new repository, but the workspace is relevant to the repository it's in, and I don't want to end up with 50 repositories either. It would be handy to be able to drill down through a repository so I can organise it better.
We have a service which we'd like to publish as an FME Server App: no frills, nothing too special. However, we'd like to be able to restrict the upload file size because we don't want users uploading huge files. As far as I can tell there is no way to restrict file size in an FME Server App, and this would be a great addition. I know I can check the file size in the workspace by using a Path reader, but that's a bit hacky and not ideal.
FME Server Apps are great and easy to share. I would love to create a public app to show capability, but I don't want the server being clogged with large jobs. An option to limit the upload size would be really nice to have for public apps. The only way to do it currently is to calculate the uploaded size at runtime; doable, but not as elegant.
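The runtime workaround both ideas mention can be sketched as a small pre-check inside the workspace (e.g. in a PythonCaller), rejecting the upload before any heavy processing runs; the 20 MB limit is just an example value:

```python
import os

MAX_UPLOAD_BYTES = 20 * 1024 * 1024  # 20 MB -- example limit, tune per app

def check_upload_size(path, limit=MAX_UPLOAD_BYTES):
    """Return (ok, size_in_bytes) for an uploaded file. When ok is False,
    the workspace can terminate early with a friendly message instead of
    processing a huge file."""
    size = os.path.getsize(path)
    return size <= limit, size
```

Routing the failure branch to a Terminator (or an HTML response) gives the app user immediate feedback instead of a long-running job.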
It would be fantastic to be able to click on elements in the model and get the attribute information about the object.
***Note from Migration:*** Original title was: ""Reject" port for writers / changes to error-log behavior with rejected features by writers"

Currently, when features are rejected while writing a file, there is no way to tell until you open the file. For example, if 51 features are passed by a transformer (and are visible in the Inspector or other data formats) but 4 cannot be written by a specific writer (rejected), FME will still report "51 features written, Transformation successful", even though in actuality only 47 features are written. I propose that the FME log and workspace say "47 features written, 4 rejected by output file, Transformation successful". Alternatively, adding a "Reject" port for writers could also be a solution. Since there is no "Reject" port for writers as there is for transformers, there is no way to tell, without scrolling through the error log, whether all features were written by a specific writer until you open the exported file in FME Data Inspector. This behavior is discussed, with an example, in this thread: https://knowledge.safe.com/questions/69542/missing-points-when-exporting-to-shapefile.html?childToView=69550#comment-69550
We have transformer presets; how about parameter presets? For example, I have a complex choice parameter with a tree structure. I want to create common choice-value sets and apply them in the future; it would save time and reduce errors if I could save parameter presets.
I hope Safe could do us a favor and let us use a key to encrypt/decrypt when writing and reading data from a database. The goal is to make the database unreadable by any application that does not have the key. I think both the data and the column names should be encrypted.
It would be very helpful to have a "Find" function for strings that searches for the presence of one or more characters starting with the last character in the string and moving from right to left (i.e. a "Reverse Find"). The particular use case I was trying to address was to search for "\" so as to find the last subdirectory in a path, in order to get the path of its parent directory using the "Left" function to grab the leftmost characters of my original path up to but not including the rightmost occurrence of "\" in the string. For example: @Left('C:\a\c\d', @StringLength('C:\a\c\d') - @ReverseFindString('C:\a\c\d', '\', -1, caseSensitive=TRUE)) = 'C:\a\c'
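For comparison, the proposed behavior matches Python's str.rfind, which searches from the right. A small sketch of the parent-directory use case (parent_path is a hypothetical helper name, not an FME function):

```python
def parent_path(path, sep="\\"):
    """Return everything left of the last separator, i.e. the parent
    directory. Mirrors the proposed 'Reverse Find' + @Left combination."""
    idx = path.rfind(sep)  # rightmost occurrence, searching right-to-left
    return path[:idx] if idx != -1 else path
```

The same one-liner works for forward-slash paths by passing sep="/".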
The GradedColorizer is super useful, but very limited in the range of color ramps available. To bring it in line with other visualization tools and libraries, it would be great if it offered a wider selection of color palettes, making visualizations both aesthetically pleasing and easy to read.
We should be able to set the order of engine assignment. For example, I would like to set up a job queue for high-priority workspaces where I assign our Static Engines first and then a few Dynamic Engines. That way, if the Static Engines are idle they can process the jobs, and if they are busy the Dynamic Engines can take over. Currently the picking order in the job queue is not respected and seems to be alphabetical, so Dynamic Engines are picked first.