Is it a good idea to use parallel_scan to parse a text file? For example, suppose you have a file containing commands that build a model, and reading the file in the wrong order would change the model. In this case, would parallel_scan be appropriate, would a pipeline be better, or should I consider a combination of both?
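To make the contrast concrete, here is a minimal sketch of the pipeline side, assuming the oneTBB parallel_pipeline API; the file name model.txt and the one-command-per-line format are made up for illustration. A serial_in_order input filter reads commands in file order, a parallel middle stage does any order-independent work, and a serial_in_order final stage applies each command to the model in the original order:

```cpp
// Sketch only: a pipeline that preserves the order-sensitive part.
// Assumes oneTBB; "model.txt" and the command format are hypothetical.
#include <tbb/parallel_pipeline.h>
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream in("model.txt");  // hypothetical command file
    tbb::parallel_pipeline(
        /*max_number_of_live_tokens=*/8,
        // Stage 1: read one command per line, strictly in file order.
        tbb::make_filter<void, std::string>(
            tbb::filter_mode::serial_in_order,
            [&](tbb::flow_control& fc) -> std::string {
                std::string line;
                if (!std::getline(in, line)) { fc.stop(); return {}; }
                return line;
            }) &
        // Stage 2: order-independent work (tokenising, validating)
        // can run concurrently across lines.
        tbb::make_filter<std::string, std::string>(
            tbb::filter_mode::parallel,
            [](std::string line) { return line; /* parse here */ }) &
        // Stage 3: apply each command to the model in file order.
        tbb::make_filter<std::string, void>(
            tbb::filter_mode::serial_in_order,
            [](const std::string& cmd) {
                std::cout << "apply: " << cmd << '\n';
            }));
    return 0;
}
```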
If you can automatically resynchronise after landing at an unpredictable file location, I can see how you might parse parts of the model on the first pass that you wouldn't have to look at again on the second pass. Have you searched for any publications on parallel parsing that might help you?
(Correction) If you wanted to apply parallel_scan, the second pass wouldn't seem very useful; only the joins would be, plus perhaps some processing after parallel_scan has ended, or even repeated invocations. But I like the idea.
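As a sketch of what those joins might look like, here is a minimal, made-up example (assuming the oneTBB parallel_scan API) that uses the pre-scan/final-scan structure to compute, for each character, how many newlines precede it, i.e. which command line it belongs to. That kind of prefix information is one way a worker dropped at an unpredictable file location could resynchronise:

```cpp
// Sketch only, assuming oneTBB. "text" and "line_of" are illustrative.
#include <tbb/blocked_range.h>
#include <tbb/parallel_scan.h>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

int main() {
    const std::string text = "cmd a\ncmd b\ncmd c\n";
    std::vector<std::size_t> line_of(text.size());

    tbb::parallel_scan(
        tbb::blocked_range<std::size_t>(0, text.size()),
        std::size_t(0),
        // Each chunk may run twice: a pre-scan pass that only
        // accumulates the newline count, and a final pass that
        // also records which line every character sits on.
        [&](const tbb::blocked_range<std::size_t>& r, std::size_t sum,
            bool is_final_scan) -> std::size_t {
            for (std::size_t i = r.begin(); i != r.end(); ++i) {
                if (is_final_scan) line_of[i] = sum;
                if (text[i] == '\n') ++sum;
            }
            return sum;
        },
        // The join: splice a chunk's running count onto the total of
        // its left neighbour. All cross-chunk state funnels through here.
        [](std::size_t left, std::size_t right) { return left + right; });

    for (std::size_t i = 0; i < text.size(); ++i)
        std::cout << line_of[i] << (text[i] == '\n' ? "\n" : "");
    return 0;
}
```

Everything order-sensitive is confined to the join, which is consistent with the point above that the joins, not the second pass by itself, are where parallel_scan would earn its keep for parsing.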