All notable changes to this project will be documented in this file.
- When a URL is added via the `add(_:)` method and there are available spiders, scraping starts immediately instead of waiting for previous URLs to finish.
- The `Spider` protocol no longer requires an `init(url:)` initializer; the `url` parameter is passed to the `request(url:completion:)` method instead. `URLSessionSpider` now has a parameterless `init()`.
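To illustrate the shape of this change, here is a minimal sketch. The `SpiderLike` protocol and `StubSpider` type below are hypothetical stand-ins mirroring the described change (URL delivered to `request(url:completion:)` rather than an initializer), not Swarm's actual declarations:

```swift
import Foundation

// Hypothetical mirror of the revised protocol shape: the URL now arrives
// as a parameter of `request`, not through an `init(url:)` requirement.
protocol SpiderLike {
    init() // parameterless, like the new URLSessionSpider.init()
    func request(url: URL, completion: @escaping (Data?, URLResponse?, Error?) -> Void)
}

// A stub spider demonstrating conformance with no stored URL.
struct StubSpider: SpiderLike {
    init() {}
    func request(url: URL, completion: @escaping (Data?, URLResponse?, Error?) -> Void) {
        // A real spider would perform a network request for `url` here.
        completion(Data("stub".utf8), nil, nil)
    }
}

let spider = StubSpider()
spider.request(url: URL(string: "https://example.com")!) { data, _, _ in
    print(data.map { String(decoding: $0, as: UTF8.self) } ?? "nil")
}
```

Because spiders no longer bind to a single URL at creation time, one spider instance can serve successive URLs.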
- Linux support
- `ScrappableURL.userInfo` property for storing values or objects related to this scrappable URL
- `Swarm.cooldown` property, which allows customizing how cooldown is executed. For example, when using Swarm from a Vapor app, which itself uses SwiftNIO for scheduling tasks, the cooldown property can be set up in the following way:
```swift
swarm.cooldown = { interval, closure in
    eventLoop.scheduleTask(in: TimeAmount.seconds(Int64(interval)), closure)
}
```
- Absence of both response and data is now treated as a network request failure (request timeout), and the request is retried `configuration.delayedRequestRetries` times after a `configuration.delayedRetryDelay` interval.
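A sketch of tuning these two settings; the property names come from the notes above, while the `SwarmConfiguration()` initializer and the exact property types are assumptions about the library's API:

```swift
// Hypothetical configuration fragment, assuming a default-initializable
// SwarmConfiguration with the two properties named in this changelog entry.
let configuration = SwarmConfiguration()
configuration.delayedRequestRetries = 3  // retry a timed-out request up to 3 times
configuration.delayedRetryDelay = 5.0    // wait 5 seconds before each retry
```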
- Action log for previous requests is now stored properly, allowing the correct number of request retries.
- Initial release