hmm *maybe* the user did enter that as their URL, as your file shows only 2. Not sure why they would, but......
That would imply everything is working correctly.
The program scraped the links
you extracted just the links containing twitter (with some of those invalid because the user entered them that way).
Scrape Twitter - suggestion
- martin@rootjazz
- Site Admin
- Posts: 34712
- Joined: Fri Jan 25, 2013 10:06 pm
- Location: The Funk
- Contact:
Re: Scrape Twitter - suggestion
Yes correct. Now it works fine. Lol

martin@rootjazz wrote: hmm *maybe* the user did enter that as their URL, as your file shows only 2. Not sure why they would, but......
That would imply everything is working correctly.
The program scraped the links
you extracted just the links containing twitter (with some of those invalid because the user entered them that way).

I don't have that ' * SKIP: Exception of type 'LibUtil.ExSkipItem' was thrown. ' error anymore; I have no idea why it stopped giving me that error.
Another question Martin - now that I have the URLs, how can I change them to IDs?
TwitterDub has an option to extract URLs from IDs, but I can't find any way to do it the other way around.
Could you please provide a solution for this? I would like to change every twitter URL extracted using SCM to a twitter ID.
Thanks!
EDIT: And maybe also a feature to export only usernames?
- martin@rootjazz
- Site Admin
- Posts: 34712
- Joined: Fri Jan 25, 2013 10:06 pm
- Location: The Funk
- Contact:
Re: Scrape Twitter - suggestion
It isn't an error, just a log that shouldn't be logged. The program is stating IGNORE THIS ITEM and go back to a certain place in the code. However, something between that line of code and where it should go detects it and logs it as an encompassing log.

Bartekef wrote: Yes correct. Now it works fine. Lol

martin@rootjazz wrote: hmm *maybe* the user did enter that as their URL, as your file shows only 2. Not sure why they would, but......
That would imply everything is working correctly.
The program scraped the links
you extracted just the links containing twitter (with some of those invalid because the user entered them that way).

I don't have that ' * SKIP: Exception of type 'LibUtil.ExSkipItem' was thrown. ' error anymore; I have no idea why it stopped giving me that error.

you cannot. Well you can, just not in the program. But why do you want to? It requires one call per URL to get the ID.

Bartekef wrote: Another question Martin - now that I have the URLs, how can I change them to IDs?

The program can work with URLs so there is no need. Basically it is extra work for me for zero benefit for you or me (from what I can tell)
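For anyone who does need the IDs outside the program: the point above (one call per URL) can be softened a little, since the Twitter REST API v1.1 `users/lookup` endpoint accepts up to 100 screen names per request. The sketch below is an assumption-heavy outline, not anything SCM or TwitterDub provides: it assumes you have your own valid bearer token, and that the v1.1 endpoint is still available to your account.

```python
# Sketch: resolve Twitter profile URLs to numeric user IDs.
# Assumes Twitter API v1.1 `users/lookup` (max 100 screen names per call)
# and a valid bearer token -- both are assumptions, not something the
# program above provides.
import json
import urllib.request
from urllib.parse import urlparse


def screen_name(url: str) -> str:
    """Extract the screen name from a URL like https://twitter.com/rootjazz."""
    return urlparse(url).path.strip("/").split("/")[0]


def batches(items, size=100):
    """Yield chunks of at most `size` items (the per-call lookup limit)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def lookup_ids(urls, bearer_token):
    """One network call per 100 URLs; returns {screen_name: id_str}."""
    ids = {}
    for chunk in batches([screen_name(u) for u in urls]):
        req = urllib.request.Request(
            "https://api.twitter.com/1.1/users/lookup.json"
            "?screen_name=" + ",".join(chunk),
            headers={"Authorization": f"Bearer {bearer_token}"})
        for user in json.load(urllib.request.urlopen(req)):
            ids[user["screen_name"]] = user["id_str"]
    return ids


print(screen_name("https://twitter.com/rootjazz"))  # -> rootjazz
```

The batching means a few thousand URLs is dozens of calls rather than thousands, though rate limits still apply.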
Re: Scrape Twitter - suggestion
martin@rootjazz wrote: The program can work with URLs so there is no need. Basically it is extra work for me for zero benefit for you or me (from what I can tell)

I was looking to make a promoted tweet targeting a tailored audience, and I need IDs for that. Do you know any other way I could get IDs from URLs?
The other option would be to have usernames instead of IDs.
- martin@rootjazz
- Site Admin
- Posts: 34712
- Joined: Fri Jan 25, 2013 10:06 pm
- Location: The Funk
- Contact:
Re: Scrape Twitter - suggestion
try searching Google, there may be a service to change URLs to IDs
if you want usernames, just search and replace
Re: Scrape Twitter - suggestion
unfortunately, I didn't find such a service

martin@rootjazz wrote: try searching Google, there may be a service to change URLs to IDs
if you want usernames, just search and replace

What do you mean by search and replace? I have .txt files with dozens of URLs. Do you have any idea of the quickest way I could change them to usernames?
- martin@rootjazz
- Site Admin
- Posts: 34712
- Joined: Fri Jan 25, 2013 10:06 pm
- Location: The Funk
- Contact:
Re: Scrape Twitter - suggestion
if you have a list of
https://twitter.com/rootjazz
...
you can search: https://twitter.com/
replace with: <empty>
Not actually the string <empty> but replace it with nothing.
Notepad++ will do it for you if you don't have a text editor
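If the lists get long, the same search-and-replace can be scripted instead of done by hand in Notepad++. A minimal Python sketch (the file names in the commented usage are made up for the example):

```python
def strip_prefix(lines, prefix="https://twitter.com/"):
    """Remove the profile-URL prefix from each line, leaving bare usernames.

    Blank lines are dropped; everything else passes through unchanged
    apart from the prefix removal -- the same effect as the Notepad++
    search-and-replace described above.
    """
    return [line.strip().replace(prefix, "") for line in lines if line.strip()]


# Usage on a file (file names are hypothetical):
# with open("twitter_urls.txt") as f:
#     names = strip_prefix(f)
# with open("twitter_usernames.txt", "w") as out:
#     out.write("\n".join(names))

print(strip_prefix(["https://twitter.com/rootjazz\n"]))  # -> ['rootjazz']
```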
Re: Scrape Twitter - suggestion
Thank you Martin. I'm wondering, could you send me that SoundCloud Manager guide in PDF?

martin@rootjazz wrote: if you have a list of
https://twitter.com/rootjazz
...
you can search: https://twitter.com/
replace with: <empty>
Not actually the string <empty> but replace it with nothing.
Notepad++ will do it for you if you don't have a text editor
- martin@rootjazz
- Site Admin
- Posts: 34712
- Joined: Fri Jan 25, 2013 10:06 pm
- Location: The Funk
- Contact:
Re: Scrape Twitter - suggestion
no, I don't have it in PDF format. You might be able to find website-to-PDF converters online though... maybe
Re: Scrape Twitter - suggestion
I mean that PDF document which was supposed to promote your product, SCM - the ebook which actually brought me here.

martin@rootjazz wrote: no, I don't have it in PDF format. You might be able to find website-to-PDF converters online though... maybe

Anyways.
I don't want to start a new post as you continuously help me here. Could you give me some advice? Maybe you will know how to manage this.
I have a lot of twitter URLs scraped (thanks to your software) and now I'm following them. Some people follow me back, some don't. Some I can message directly, some not. I can tweet at anyone, though.
Do you have an idea how I can:
1) send a direct message to those I can (this step is already easy with TwitterDUB - I will just send messages to my list in the txt)
2) then, send a tweet to the rest of the people from the same txt file, excluding those I sent a DM before (to avoid duplicates)
And all that without using the White list function, because I have thousands of URLs, so putting them all on the White list manually wouldn't be the most efficient way.
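One way to handle the exclusion in step 2 outside the program, assuming you can save the URLs that were successfully DM'd into their own .txt file (that export is an assumption about your workflow, not a documented TwitterDUB feature): subtract the DM'd list from the full list before the tweet run. A minimal sketch:

```python
def remaining_targets(all_urls, already_dmed):
    """Return URLs from the full list that were NOT direct-messaged yet,
    preserving the original order of the full list."""
    done = {u.strip() for u in already_dmed}
    return [u.strip() for u in all_urls
            if u.strip() and u.strip() not in done]


# Example lists (hypothetical URLs; in practice these would come from
# your two .txt files, e.g. all_urls.txt and dmed_urls.txt):
all_urls = ["https://twitter.com/a",
            "https://twitter.com/b",
            "https://twitter.com/c"]
dmed = ["https://twitter.com/b"]

print(remaining_targets(all_urls, dmed))
# -> ['https://twitter.com/a', 'https://twitter.com/c']
```

The resulting file is then the input for the tweet step, with no White list needed.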
Thanks!